Sunday, 4 June 2017

Secure Coding Practices for Microsoft .NET Applications

1.0 Overview
As .NET applications proliferate with the adoption of Web services, security plays a critical role in the implementation of business operations based on these new technologies. This paper details the tenets of secure coding specific to .NET applications and gives specific suggestions for the Visual Studio .NET environment. Specifically, it focuses on five common ASP.NET application security flaws and offers recommendations for delivering higher quality applications.


2.0    The tenets of secure coding

2.1    Distrust relationship
The primal sin of all web applications is their tendency to trust user input. It is assumed that since browsers are used to interact with the site, users – good and bad – are bound by the browser and can only send data from the browser. This is obviously not true. It is amazingly easy to send any kind of data to the application. In fact, hackers have a rich toolkit of programs whose sole purpose is to provide a means to interact with and attack sites outside the boundaries of the browser. From the lowest raw line-mode interface (e.g. telnet), through CGI scanners, web proxies, and web application scanners, attackers have a diverse spectrum of possible attack modes and means.

The only way to counteract the plethora of attack directions, techniques, and tools is to validate user input. Always, all input, all of the time, again and again. Here are some guidelines:

1.      Assume nothing about user input

2.      Formulate your validation criteria for all user input
3.      Enforce the validation criteria on all user input

4.      Validate the data on a trusted machine (the server)
5.      Trust only what you validated

6.      Use multiple-tier validations

Notes:

·          Regarding guideline 4, it goes without saying that the validation should take place on the server, a trusted platform, as opposed to on the client/browser, which cannot be trusted. Client-side JavaScript code that validates user input prior to submitting is a nice idea as far as performance and user experience go, but from a security point of view it is meaningless, or even worse – it may provide a false sense of security. Anything that runs on the client side can be fooled, and it is especially easy to do so with JavaScript code.

·          Regarding guideline 6, it makes sense to perform several, perhaps overlapping validations, as sketched below. For instance, a program may validate all input upon receiving it to make sure it consists of valid characters and that no field is too long (a potential buffer overflow). Some routines may then carry out further validations, making sure that the data is reasonable and valid for the specific purposes it will be used for. A more fine-grained character set validation may be applied, as well as length restriction enforcement.
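Here is a minimal C# sketch of such multi-tier validation (the field semantics and limits are assumptions made for this example, not taken from the paper):

using System.Text.RegularExpressions;

public static class OrderInputValidator
{
    // Tier 1: coarse validation applied to every incoming field:
    // an allowed character set plus a global length cap.
    public static bool IsWellFormed(string field)
    {
        return field != null
            && field.Length <= 256
            && Regex.IsMatch(field, "^[a-zA-Z0-9 .,-]*$");
    }

    // Tier 2: finer, purpose-specific validation; here, a quantity
    // field must be a small positive integer.
    public static bool IsValidQuantity(string field)
    {
        int quantity;
        return IsWellFormed(field)
            && int.TryParse(field, out quantity)
            && quantity > 0
            && quantity <= 999;
    }
}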






2.2    Positive Thinking

The second tenet of secure coding is to formulate the validation in a positive manner – that is, to provide positive security, rather than negative security. Positive security means that only data known to be good is allowed into the system; unknown, unrecognized, or evil data is rejected. Negative security means that only data known to be evil is rejected, while all other data, including the unrecognized and the unknown, is allowed.

For example, an input field holding a user name can be checked for characters that are allowed to appear in a user name (e.g. alphanumeric characters) – this provides positive security. On the other hand, the input field can be checked for hazardous characters such as an apostrophe, or for forbidden patterns such as a double hyphen – this provides negative security.
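A minimal C# sketch of both approaches (the field name and length limit are illustrative):

using System.Text.RegularExpressions;

public static class UserNameValidator
{
    // Positive security: accept only what is known to be good
    // (alphanumeric user names of a bounded length).
    public static bool IsValidPositive(string userName)
    {
        return userName != null && Regex.IsMatch(userName, "^[a-zA-Z0-9]{1,20}$");
    }

    // Negative security: reject only what is known to be bad;
    // anything unrecognized, including new attack patterns, gets through.
    public static bool IsValidNegative(string userName)
    {
        return userName != null
            && !userName.Contains("'")
            && !userName.Contains("--");
    }
}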


2.3    Comparison between positive security and negative security


·          Definition: Positive security – all data allowed into the system is defined. Negative security – all data disallowed into the system is defined.

·          Typical implementation: Positive – allowed value list, allowed characters. Negative – forbidden patterns, forbidden characters.

·          Example – allowing valid file names (or blocking malicious file names via a pattern): Positive – [a-zA-Z0-9]{1,20}\.html. Negative – \.\.\\|\\\.\.|\.\./|/\.\. (block the patterns “..\”, “\..”, “../”, “/..”).

·          Security: Positive – high, since only valid data is allowed. Negative – low: are all the hazardous characters listed? How can one be sure that the patterns suffice, and cannot be smartly bypassed?

·          Functionality: Positive – relatively prone to blocking valid data. Negative – less prone to blocking allowed data (although this can still happen when patterns or forbidden characters are too broadly defined).

Obviously, when it can be achieved, positive security is superior to negative security, and should be used whenever possible.











3.0     “What’s wrong with this picture?”: Five common ASP.NET security flaws and suggested coding recommendations

3.1        Parameter tampering and ASP.NET field validators
a.    The problem – parameter tampering

Trusting user input is the number one enemy of web application security. But how does this flaw appear in real life?

The major source for user input in a web application is the parameters submitted in HTML forms. Failing to validate these parameters may result in a severe security hole.

b.      Flawed code (C# querying a backend Microsoft SQL server, assuming the variables “user” and “password” are taken as-is from the user input)

SqlDataAdapter my_query = new SqlDataAdapter(
    "SELECT * FROM accounts WHERE acc_user='" + user +
    "' AND acc_password='" + password + "'", the_connection);

Note:

·          The code and examples throughout this paper are written for MS-SQL servers, though the ideas apply to practically all database servers.

c.      The result

While this looks relatively innocent, it in fact opens the gate to a most vicious SQL injection attack. By choosing the input field “user” to be ' OR 1=1-- the attacker can probably log in to the system as an arbitrary user. A refinement of this (assuming the attacker knows that the super-user's user name is “admin”) is to inject the data admin' -- as the user field, in which case the attacker will be logged in as the super-user. And finally, it may be possible to execute shell commands, simply by appending the appropriate call right after the query, as in

'; EXEC master..xp_cmdshell('shell command here')--

What's going on here? The programmer assumed that the user input consists solely of “normal” data – real user names, real passwords. These usually do not contain the character ' (apostrophe), which happens to play a major role in SQL's syntax. Therefore, there is no harm in generating the SQL query from valid data. But if the data is invalid and contains unexpected characters, such as ', then the generated query is not the query the programmer intended to execute, and therein lies the attack.





"[a-zA-Z0-9]*"
d.    The solution: ASP.NET validators

Perhaps the most important contribution to ASP.NET’s web application security is the introduction of field validators. A field validator, as the name hints, is a mechanism that enables the ASP.NET programmer to enforce some restrictions on the field value, thereby validating the field.

There are several types of field validators. In this case, we can use a regular expression validator (i.e. a validator that enforces that the user input field matches a given regular expression). In order to block the attack shown above, we need to forbid the apostrophe character, thus taking the negative security approach: "[^']*". Better yet, we can formulate a regular expression that allows only alphanumeric characters for this field, thus taking the positive security approach: "[a-zA-Z0-9]*".


By incorporating and correctly using the field validator mechanism, the developer can programmatically secure all input fields of the application against attacks such as cross site scripting and SQL injection.

Further Reading:

·          “User Input Validation in ASP.NET” - http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnaspp/html/pdc_userinput.asp

·          “Web Forms Validation” - http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vbcon/html/vboriWebFormsValidation.asp

·          “ASP.NET validation in depth” - http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnaspp/html/aspplusvalid.asp



3.2        Parameter tampering revisited - avoid validator pitfalls (and a note about information exposure)

a.  The problem – parameter tampering (take II)

After reading the above section about ASP.NET field validators, you incorporate validators for every user input field. You may now feel safe from parameter tampering; sadly, you are not. How come? There are several pitfalls in the implementation of field validators; here are the important ones:

The first example demonstrates the importance of understanding the processing flow of ASP.NET pages with respect to field validators and error handling.







b.    Flawed Code #1

<%@ Page Language="vb" %>

<form method="post" runat="server" ID="Form1">
Please Login<br>
User Name:
<asp:Textbox ID="user" runat="server"/><br>
<asp:RegularExpressionValidator
    ControlToValidate="user"
    ValidationExpression="[a-zA-Z0-9]{1,10}" runat="server" />
Password:
<asp:Textbox ID="pass" runat="server"/><br>
<asp:RegularExpressionValidator
    ControlToValidate="pass"
    ValidationExpression="[a-zA-Z0-9]{1,10}" runat="server" />
<asp:Button id="cmdSubmit" runat="server" Text="Submit!" OnClick="do_login"></asp:Button>
</form>

<script runat="server">
Sub do_login(sender As Object, e As System.EventArgs)
    ' I'm validated, so let's query the database
End Sub
</script>

c.    Result

The hacker can ignore the whole security mechanism – the character set validation code – since it does not actually affect the flow of the code. The hacker can, therefore, run SQL injection attacks just as described above.

d.    Solution: Field validators must be explicitly checked

It is not enough to just define a validator for the field in question. Doing so does produce an error message in the HTML sent to the client, but the whole page is still rendered, and processing does not stop when the validator fails. The right approach is to explicitly verify that the validator returned a positive result (logical “true”) before proceeding with processing the page and executing sensitive transactions. Verification can be done per validator, by querying the IsValid property of the validator. Alternatively, the logical AND of all validators is represented by the page property IsValid, which may be queried to get the combined result of all validators.







A secure version of the above code would be:

<%@ Page Language="vb" %>

<form method="post" runat="server" ID="Form1">
Please Login<br>
User Name:
<asp:Textbox ID="user" runat="server"/><br>
<asp:RegularExpressionValidator
    ControlToValidate="user"
    ValidationExpression="[a-zA-Z0-9]{1,10}" runat="server" />
Password:
<asp:Textbox ID="pass" runat="server"/><br>
<asp:RegularExpressionValidator
    ControlToValidate="pass"
    ValidationExpression="[a-zA-Z0-9]{1,10}" runat="server" />
<asp:Button id="cmdSubmit" runat="server" Text="Submit!" OnClick="do_login"></asp:Button>
</form>

<script runat="server">
Sub do_login(sender As Object, e As System.EventArgs)
    If Page.IsValid Then
        ' I'm validated, so let's query the database
    Else
        ' ... error handling
    End If
End Sub
</script>

Further reading:

·          “Testing Validity Programmatically for Asp.NET Server Controls” http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vbcon/html/vbtsktestingvalidityprogrammatically.asp

The second example is about the correct syntax and usage of the RangeValidator.

e.    Flawed Code #2

<!-- check for a number 1-9 -->

<asp:RangeValidator … MinimumValue="1" MaximumValue="9" …/>










f.     The result

The hacker can actually enter any positive number into the application (e.g. “123”), as well as some non-numeric data (e.g. “0abcd”). The application may enter an undefined state.

g.   The solution: Range validation should specify the correct data type

When using the RangeValidator ASP.NET control, it is important to keep in mind that the Type attribute must be set according to the type of input expected. The Type attribute defaults to “String”. This has a nasty consequence if the developer forgets about it or is unaware of it, as we saw in the above flawed code. Since no Type is specified, ASP.NET assumes “String”, meaning that the comparison is a lexicographical one. Therefore, the validator will only ensure that the string starts with 0-9; strings such as “0abcd” will be accepted.

The right way to test for integer range is to specify the type as “Integer”, e.g.:

<!-- check for a number 1-9 -->

<asp:RangeValidator … MinimumValue="1" MaximumValue="9" Type="Integer" … />

Further reading:

·         “RangeValidator Control” - http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpgenref/html/cpconrangevalidatorcontrol.asp


The third example is about an easy-to-miss shortcoming of performing client-side verification:

h.   Flawed code #3

<asp:RegularExpressionValidator

ControlToValidate="user"

ValidationExpression= "Jim|Joe|Charlie|Admin|System|Frank" …
/>


i.     The result

The attacker gains valuable information – the names of the admin accounts. While this may not be useful for this page (after all, this particular value is allowed), it may be of use in other pages.







This happens because, by default, the validator code is executed both at the client side and at the server side. The client-side code provides good performance, since there is no need for the request to be sent to the server and the response to be returned, and a good user experience, with immediate validation before the data is actually sent. The server-side code provides security (validation on a trusted machine). The downside of this scheme is that the security validation parameters are exposed to the client, since the same validation is run there. In some cases, this has a negative overall effect.

For example, a system that is designed to let in only certain users through its login page may have a regular expression validator for the user name such as “Jim|Joe|Charlie|Admin|System|Frank”. This is definitely the best one can get along the lines of positive security (only the six designated usernames are valid). However, since by default the validation is also performed at the client side, this information will be found in the HTML page presented to the client. Consequently, the client may be able to reverse engineer the validator and learn the names of the (only) six valid accounts.

j.     The solution

Either disable client-side validation for validators that may expose sensitive information (this can be done by setting the EnableClientScript property of the validator control to “false”), and/or validate this data using a different mechanism.

The below secure code takes the first approach – validation is carried out at the server side only:

<asp:RegularExpressionValidator
    ControlToValidate="user"
    ValidationExpression="Jim|Joe|Charlie|Admin|System|Frank"
    EnableClientScript="False" …
/>

Further reading

·          “Disabling Client Side Validation” - http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vbcon/html/vbtskdisablingvalidationatruntime.asp


















3.3    Information leakage: Remember that __VIEWSTATE data can be viewed

a.    The problem: information about the application internals leaks out

An often overlooked source of information about an ASP.NET application is the __VIEWSTATE hidden field, which can be found in almost all HTML pages. This hidden field is overlooked because it is Base64 encoded, which makes it look like an innocent string of alphanumeric characters (actually, the forward slash, plus sign, and equal sign are also part of the Base64 character set).

b.    Flawed code (the web.config configuration file)

<configuration>
  <system.web>
    … (no <machineKey> element)
  </system.web>
</configuration>

c.    Result

The __VIEWSTATE's Base64 encoding can be easily decoded, and the __VIEWSTATE data exposed with minimal effort. The attacker can then see information that may be sensitive, such as internal state data of the application.

By default, the __VIEWSTATE data consists of:

·         Dynamic data from page controls

·         Data stored explicitly by the developer in the ViewState bag

·         Cryptographic signature of the above data

The first two data items appear in the clear and, as such, provide an attacker with information about the application. The third item, the cryptographic signature, ensures that the data cannot be tampered with, yet the data itself is not encrypted.
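To see how little protection the encoding provides, here is a minimal sketch (not from the original paper; the sample value is hypothetical) that decodes a __VIEWSTATE string. The field actually holds a serialized object graph, but strings stored in the ViewState bag show up readably in the decoded bytes:

using System;
using System.Text;

class ViewStateDecoder
{
    static void Main()
    {
        // Hypothetical __VIEWSTATE value copied from a page's HTML source.
        string viewState = "ZGVtbyBzdGF0ZSBkYXRh";
        byte[] raw = Convert.FromBase64String(viewState);
        // Any string data placed in the ViewState bag appears in the clear.
        Console.WriteLine(Encoding.UTF8.GetString(raw));
    }
}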

d.    The solution: encrypt the __VIEWSTATE data

<configuration>
  <system.web>
    <machineKey validation="3DES"/>
  </system.web>
</configuration>




Further reading:

·         “Taking a Bite Out of ASP.NET ViewState” - http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnaspnet/html/asp11222001.asp


3.4        SQL injection - Use SQL parameters to prevent SQL injection


a.      The problem: SQL injection

The problem was described in the section “parameter tampering” above. As a reminder: the script formed an SQL query by embedding user input. A malicious character (apostrophe), when placed in the input field, caused the SQL server to execute a query totally different from the one intended.


b.    Flawed code

SqlDataAdapter my_query = new SqlDataAdapter(
    "SELECT * FROM accounts WHERE acc_user='" + user +
    "' AND acc_password='" + password + "'", the_connection);

c.    The result

Just like the first example, by inserting the apostrophe character, an attacker can completely change the meaning of the SQL query. Consequently, an attacker can shape his/her own query, run different additional queries, and possibly execute SQL commands, which may compromise the server.

d.    The solution

The obvious solution is to allow only the characters that are really needed. But what if apostrophe is in fact needed? In some cases, an apostrophe can be part of a person’s name, or part of a perfectly valid English sentence.

The more robust approach to SQL injection prevention is to use a SQL parameters API (such as the one provided by ADO.NET), so that the programming infrastructure, and not the programmer, constructs the query.

Using such an API, the programmer provides a template query or a stored procedure, together with a list of parameter values. These parameters are then securely embedded into the query, and the result is executed by the SQL server. The advantage lies in having the infrastructure embed the parameters, since it is guaranteed that they will be embedded correctly. For example, apostrophes will be escaped, thus rendering SQL injection attacks useless.




So instead of the code in the “parameter tampering” section, use:

SqlDataAdapter my_query = new SqlDataAdapter(
    "SELECT * FROM accounts WHERE acc_user=@user AND acc_password=@pass",
    the_connection);

SqlParameter userParam = my_query.SelectCommand.Parameters.Add(
    "@user", SqlDbType.VarChar, 20);
userParam.Value = user;

SqlParameter passwordParam = my_query.SelectCommand.Parameters.Add(
    "@pass", SqlDbType.VarChar, 20);
passwordParam.Value = password;

This ensures that the apostrophe character is properly escaped and will not jeopardize the application or the SQL database. At the same time, the apostrophe will not be blocked, which is an upside of this approach.

Further reading:

·          “Data Access Security” (see the section “SQL Injection Attacks”) http://msdn.microsoft.com/library/en-us/dnnetsec/html/SecNetch12.asp?frame=true#sqlinjectionattacks

·          “Building SQL Statements Securely” - http://msdn.microsoft.com/library/default.asp?url=/library/en-us/csvr2002/htm/cs_se_securecode_pajt.asp


3.5         Cross Site Scripting (insecure composition of HTML pages) – HTML encode outgoing data

a.      The problem: Cross Site Scripting

An application vulnerable to cross-site scripting is one that embeds malicious user input in the response (HTML) page. To learn more about cross-site scripting attacks, it is suggested that you read the paper “Cross Site Scripting Explained” at www.sanctuminc.com/pdf/WhitePaper_CSS_Explained.pdf


















b.    Flawed code

<%@ Page Language="vb" %>

<asp:Label id="Label1" runat="server">INITIAL LABEL VALUE</asp:Label>

<form method="post" runat="server" ID="Form1">
Please Provide feedback<br>
<asp:Textbox ID="feedback" runat="server"/><br>
<asp:Button id="cmdSubmit" runat="server" Text="Submit!" OnClick="do_feedback"></asp:Button>
</form>

<script runat="server">
Sub do_feedback(sender As Object, e As System.EventArgs)
    Label1.Text = feedback.Text
End Sub
</script>

c.    The result

An attacker can form a malicious request with JavaScript code that will get executed at the client browser when the link is clicked. To see that this is possible, the above script can be fed the following input:

<script>alert(document.cookie)</script>


d.      The solution: HTML-encode user data that is sent back in the HTML response

On top of user input validation (in this case – does a normal user really need the less-than and greater-than symbols? Perhaps these can be considered invalid characters), the classic solution to this problem is to HTML-encode outgoing user data. HTML-encoding the data presented in the HTML page ensures that this data is not interpreted by the browser as anything other than plain text. Thus, the script injection attack is completely de-fanged.

In the above case, this maps simply to adding a function call to HtmlEncode in one place:
Label1.Text=Server.HtmlEncode(feedback.Text)


As a result, the response HTML stream will contain:

&lt;script&gt;alert(document.cookie)&lt;/script&gt;






This is indeed harmless – no JavaScript code is executed by the browser, because no HTML “script” tag is present. The less-than and greater-than symbols are replaced by their HTML-encoded versions, &lt; and &gt; respectively.

Note that ideally, this method should be combined with user input validation, thus providing a two-tier security architecture for the application.

Further reading:

·          “Cross Site Scripting Explained” - http://www.sanctuminc.com/pdf/WhitePaper_CSS_Explained.pdf

·          “Security Tips: Defend Your Code with Top Ten Security Tips Every Developer Must Know” (see tip #3 – “Prevent Cross-Site Scripting”) - http://msdn.microsoft.com/msdnmag/issues/02/09/SecurityTips/default.aspx

·          “HttpServerUtility.HtmlEncode method” (documentation of the HtmlEncode function) - http://msdn.microsoft.com/library/en-us/cpref/html/frlrfSystemWebHttpServerUtilityClassHtmlEncodeTopic.asp?frame=tr ue


Note:

The documentation for HtmlEncode is identical to that of UrlEncode – this seems to be a mistake in HtmlEncode’s documentation.



4.0 Conclusion
ASP.NET provides several exciting productivity and security features, but these should be understood and used wisely. Failing to use the ASP.NET functions properly results in an insecure web application. We see, therefore, that ASP.NET does not exempt the programmer from following coding standards and procedures in order to write safe and secure applications.

The ASP.NET coding standards recommended in this paper are:

1.      Using ASP.NET validators to validate user input

2.      Defining and using validators correctly (avoiding pitfalls and shortcomings of validators)
3.      Encrypting the __VIEWSTATE
4.      Using SQL parameters to form SQL queries from user data

5.      Embedding user data as HTML only after HTML-encoding it




In order to verify the programmer's adherence to secure coding practices, automatic testing of the application's vulnerability to web application attacks is needed. With the use of an automated security testing tool, this should take place as part of the development process, to reduce the costs associated with fixing issues raised as a result of the testing. And by associating the security problem with the appropriate remedy, and having the programmer react immediately to the problem, the programmer also undergoes an educational process, which can reduce the likelihood of him/her coding the same mistake again.

To conclude, understanding the recommended coding standards, augmented by using an automatic Web Application Security tool to test the adherence of the code to the standards, results in a systematic bug-catching process, shorter find-fix cycles, and an easier learning curve for programmers. This in turn ensures shorter time to market, which is key to the success of any development organization.

Further reading:

·          Developing Secure Web Applications Just Got Easier: http://www.sanctuminc.com/pdf/WhitePaper_DevSecureAppsJustGotEasier.pdf



5.0    Acknowledgement

The section titled “parameter tampering revisited” is partially based on research conducted together with Ory Segal and Chaim Linhart (both from Sanctum Inc.).

Saturday, 27 May 2017

Microservices with minimum overhead using ASP.NET Web API and Azure – part 2 – Deployment

If you haven't read the first part, I strongly recommend doing so.
Just to remind you: my goal is to create an environment that lets us build fast, yet produces a system that can be easily scaled in the future. Last week, I proposed an architecture for a .NET service-oriented system which can be hosted on a single machine or easily spread across multiple microservices. In short, it consists of multiple .NET class libraries containing ASP.NET MVC (Web API) controllers. They can be hosted just by referencing them from one or more Web Application projects.
Below is a draft presenting both scenarios:
[Diagram: low-cost (single host) vs. high-cost (multiple hosts) configuration]
This week I'm going to focus on how to use Azure PaaS-level services to support the goal of building a system fast.

SHOW ME WHAT YOU GOT – DEPLOYMENT

We all know it is important to deliver pieces of software to product owners and stakeholders as soon as possible, so they can see if we are building what they really need. For this reason we want to set up an environment for deployments quickly. Time spent on configuring and managing servers is a waste here. Instead, we can deploy the system to one of the Platform as a Service solutions, which give us a lot of setup out of the box. With them, you can very quickly configure a platform that runs the application without bothering you with machine performance, security, the operating system, server management, etc.
Let's quickly deploy the source code of the initial version of MarketFinder, created in the previous post:
https://github.com/FutureProcessing/Microservices—ASP.NET-Web-API—Azure/tree/analytics_service_impl
  1. Create Azure SQL database
    1. Go to portal.azure.com,
    2. Select New -> Data + Storage -> SQL Database,
    3. Enter the DB name, and select or create a new server – this is only for authorization purposes,
    4. Select Pricing tier – basic is sufficient for this moment,
    5. Create a resource group (you should create one new group for all the services associated with a particular system),
    6. Use the CreateDatabaseTables.sql script from the repository to create its structure – initially this can be done manually; however, we will soon automate this process to make sure the DB is always up to date with the code.
  2. Create a new Web App
    1. Select New -> Web + Mobile -> Web App,
    2. Enter the name and resource group (the one created with the database),
    3. Select a pricing plan. For the sake of early testing and publishing the application to stakeholders, the free or shared plan is perfect. You can scale it up in seconds, at any time.
  3. Make them work together. We are going to configure the Web App to use the created SQL Database by overriding web.config settings. This Web Apps feature lets us avoid storing environment-dependent keys and passwords in the source code repository (a sketch of how the application picks this up follows these steps)
    1. Go to the created SQL Database blade,
    2. Click “Show database connection string”,
    3. Copy the ADO.NET connection string,
    4. Go to the created Web App blade,
    5. Select Settings, then Application Settings, scroll down to the Connection Strings section and paste the database connection string. Do not forget to fill in the password to your server; Azure can't retrieve it for you because it is stored hashed.
  4. [Screenshot: Web App application settings with the connection string]
  5. Configure continuous deployment
    1. Go to the Web App blade,
    2. In the Deployment section click “Set up continuous deployment”,
    3. Follow the wizard to set up the deployment of your choice. In my case, I log in to GitHub, select the project linked above and select the master branch. You can try this by forking my GitHub project.
    4. Every time we push something to the master branch, it is automatically deployed to the Web App,
    5. If the above solution does not fit your process, the Web App can be configured to wait until we explicitly push source code into it.
Voilà, our application is up and running.
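As a side note, here is a minimal sketch of how the application side of this override works (the connection string name “MarketFinderDb” is an assumption for the example and must match the key configured in the portal):

using System.Configuration;
using System.Data.SqlClient;

public static class Db
{
    public static SqlConnection Open()
    {
        // Reads web.config locally; on Azure, a Connection String setting
        // with the same name transparently overrides this value at runtime,
        // so no secrets need to live in the repository.
        var cs = ConfigurationManager.ConnectionStrings["MarketFinderDb"].ConnectionString;
        var connection = new SqlConnection(cs);
        connection.Open();
        return connection;
    }
}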

GOING LIVE – PRODUCTION ENVIRONMENT

The environment created so far is good for development and testing purposes; however, before we ship the system to the public we should take care of a few more aspects, and consider what we need to achieve them with and without Azure PaaS services.

TIME FOR SCALING

Everybody wishes to be successful with their application. However, when this happens, one may realize the system cannot handle such popularity. Let's see how we are prepared to scale.

Scale using Azure capabilities

The simplest thing you can do is scale up: just change the service's pricing plan to a higher one. For most PaaS services this takes just a few seconds and does not cause service downtime. However, this approach is limited by what the machine behind the highest pricing plan can handle.
Another option is scaling out, that is, creating multiple instances of our service which are automatically load balanced. To do this we can simply move a slider in the Azure Portal or, better, configure rules for autoscaling. A typical ruleset will increase the number of service instances when average CPU utilization is higher than, for example, 50% for 10 minutes, and decrease it when it is lower than 20% for 20 minutes.
There is one important thing to remember: to scale out, your application must be stateless; all state needs to be moved to an external store, like SQL Database or Redis cache. It is good to keep this in the back of your head from the beginning of project development.
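As a minimal illustration (the key names and connection string are placeholders, not taken from the sample project), moving a piece of per-user state out of process to Azure Redis Cache could look like this:

using System;
using StackExchange.Redis;

public class CartStore
{
    // In a real application the multiplexer should be created once and shared.
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect(
            "contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=True");

    public void SaveCart(string userId, string cartJson)
    {
        IDatabase db = Redis.GetDatabase();
        // Any instance behind the load balancer can read this state later.
        db.StringSet("cart:" + userId, cartJson, TimeSpan.FromHours(1));
    }

    public string LoadCart(string userId)
    {
        return Redis.GetDatabase().StringGet("cart:" + userId);
    }
}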

Scale by breaking the code into services

At some point, the above solutions might be insufficient. For example, your SQL Database cannot handle the traffic even at the highest pricing plan. Or simply one component uses a lot of CPU at some point in time, and we do not want the whole application to suffer because of it.
The architecture proposed in the previous post lets us simply extract a number of the system's components into independent hosts. Then, we can deploy them to independent Azure Web Apps and SQL databases.

OPTIMIZING COSTS

When our system is getting more traffic, the entire environment can cost quite a sum of money every month. It is hard to anticipate whether it is cheaper to host the system on a small number of big services or on a lot of small ones. With the proposed architecture, it is relatively easy to create both configurations, perform tests, and look for the cost-optimal setup for a particular system.
Another important aspect of the cloud is that you can easily scale down or even shut down services when they are not used. This can be easily automated and in some cases can cut our costs by more than half. So even though Azure or AWS services might be more expensive than local datacenters, with the mentioned flexibility you pay only for the traffic you actually get, not just for being ready to handle a lot of it.

SUMMARY

In this article I've tried to present how we can use the PaaS-level services available in Azure both to build an MVP fast and to meet the requirements of live systems when we get to that stage. When buying such services from Microsoft, we save the time otherwise spent managing servers and networks. Additionally, we are able to achieve high availability, scalability, security and monitoring without hiring an army of Ops.
I've briefly described just a couple of the services available in Azure; however, there are many more of them, to mention some: Storage, DocumentDB, Azure Search, Mobile Apps, Logic Apps, Machine Learning. You can really solve most of your problems without leaving the comfort of platforms sold as a service.

Microservices with minimum overhead using ASP.NET Web API and Azure – part 1 – Architecture

In the era of building systems that aim to provide services at global scale, requirements for scalability and high availability are becoming our bread and butter. What is more, it is absolutely normal that stakeholders want the first shippable version of the software as soon as possible.
Due to the recent hype, microservices architecture comes to our mind as the first answer to the mentioned challenges. However, as is commonly known: “(…) microservices introduce complexity on their own account. This adds a premium to a project's cost and risk – one that often gets projects into serious trouble” (Martin Fowler). We can see this approach does not help to deliver a system fast. So where is the golden mean? How can we build fast, yet be prepared to scale and provide high availability without dramatic, expensive changes to the system? Let me propose an architecture that answers those questions – at least some of them.

OVERVIEW

The main idea of this article is to organize the system into multiple lightweight, logical, decoupled components, rather than multiple independently hosted services. With this approach we can start by hosting all components in a single service and divide them into multiple ones over time.
[Diagram: hosts]
It looks nice as an idea, but what actually are the hosts, and what are those “mysterious” components?
Hosts are, in short, ASP.NET Web API (or MVC) applications that can be hosted as Azure Web Apps or started on developers' machines. Their only responsibility is to host and initially configure the application; they should contain no application logic – or simply: no controllers.
What is this component then? Let’s zoom in!
[Diagram: the internals of a component (ComponentX)]
In the suggested architecture, a component is a bunch of Web API (MVC) controllers together with all the related business logic. All those classes are placed outside the ASP.NET Web API host application, usually in a simple class library. A single component should consist of the controllers and logic related to a single business functionality. Usually, each component has its own place to store data; it can be a common or a dedicated database.
To put all this together, the host project just needs to reference the class libraries containing the components' code. From then on, the controllers are available to process HTTP requests sent to the host application, as sketched below. Moving a component to a separate host is as simple as changing references between projects.
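A minimal sketch of the idea (the names are illustrative, not taken from the sample repository): a controller living in a plain class library, which any host referencing the library will pick up through Web API's default controller discovery:

using System.Web.Http;

// Lives in a plain class library that references only the
// Microsoft.AspNet.WebApi.Core package; no host-specific code here.
public class PingController : ApiController
{
    // With the default route template api/{controller}, any host that
    // references this library serves GET /api/ping from this method.
    public string Get()
    {
        return "pong from the component";
    }
}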
With this approach we benefit from most of the advantages of a microservices architecture, and initially limit the pain that comes with it: the development environment stays easy to run on one machine, DevOps work is limited, and a lot of distributed-system problems are cut off. Additionally, we achieve flexibility in production environment costs. Initially, a limited number of machines does not dry out our accounts; however, when necessary, we are able to distribute components to multiple independent hosts, achieving better scalability.

BUILDING THE ACTUAL SYSTEM

Let's assume we are creating a social system where users can find and rate nearby shops and markets. For the scope of this article we want to:
  • Create, read, update and delete data about markets and users' ratings.
  • Generate suggestions for other markets the user may like.
And of course our stakeholders want the first version of the project quickly; however, at some point after release, they expect the system to handle millions of users from the entire world.

Let’s build it

Source code of a sample project described in this article can be found here:
https://github.com/FutureProcessing/Microservices—ASP.NET-Web-API—Azure
  1. Project structure
    First of all, we need a basic project structure.
    1. Create empty ASP.NET Web Application with Web API.
    2. Create empty Class Library and install Microsoft.AspNet.WebApi NuGet Package in it.
    3. Add reference from Web API project to Class Library.
  2. After those steps your solution should look like this:
    [Screenshot: solution structure]
  3. Market management service implementation
  4. As the next step, we want to implement CRUD operations within MarketManagementService. For the simplicity of this example I've just scaffolded MarketsController and RatingsController using Visual Studio tools.
    The source code for this state of the system can be found in the repository at the following revision:
    https://github.com/FutureProcessing/Microservices—ASP.NET-Web-API—Azure/tree/market_management_service_impl
    Now, we can test whether this solution actually works. To do so we need an appropriate tool, e.g. Fiddler or Postman, to send HTTP requests to our services. Works on my machine!
  5. Create Analytics service
    The last feature we need before release is suggestion generation. Such a feature is expected to be a time-consuming operation that we want to run periodically, storing the results in the database so they can be accessed by client applications later on. What is more, in the future we may not want this logic to run on the same machine as MarketManagementService, because this heavy operation might slow the system down. For this reason we are going to create it as a separate component. To achieve that we just need to:
    1. Create empty Class Library and install Microsoft.AspNet.WebApi NuGet Package in it.
    2. Add reference from Web API project to Class Library.
    The code itself is not important here, so we will skip discussing it.
    The source code for this state of the system can be found in the repository at the following revision:
    https://github.com/FutureProcessing/Microservices—ASP.NET-Web-API—Azure/tree/analytics_service_impl
    Now we can send a POST request to http://<base_address>/RecommendationsAnalysis and have our suggestions created.
    At this stage the system looks as follows:
    [Diagram: the MarketFinder system at this stage]
    It is important to notice at this point that there is absolutely no coupling between the two services at the code level; however, as you might have noticed, this doesn't come for free. There is some duplication in the data access code and data model classes. Although, at this moment, it could be generalized and placed in a common class library, that may not be the best solution when we think about the long-term perspective and further independent development of those services. There are no good and bad solutions, there are only pros and cons in a particular context.
    Sounds like we are ready to ship our Minimum Viable Product to customers!

Time for scaling

Until now we have created an application optimized for low development and hosting costs. It runs as a single application and stores data in a single database. Let's now react to a scenario where we already have a lot of data about markets, the single database is getting large, and calculating user recommendations takes long enough to significantly slow the website down for long periods.
Some improvements that should help with the given problem are:
  1. Move the recommendations data to a different database.
  2. Run the Analytics service in a separate host application, so we can run it in an independent container.
The first change is nearly trivial. We need to create another database with the Recommendations table. Next, because our system is already running, we need to migrate the data from the first database. After that there is a single connection string to change in web.config.
Such a change will be this easy only if we took care not to introduce coupling between the data behind different components. That is not simple and, again, usually requires some data duplication and synchronization; however, that is the cost of keeping parts of the system independent. If you are familiar with the Domain-Driven Design concept of bounded contexts, you have probably noticed that what I'm suggesting here is similar to the concept of aggregates: persisting them as internally cohesive sets of entities, decoupled from each other.
Changes to database creation scripts and project configuration can be analyzed here:
https://github.com/FutureProcessing/Microservices—ASP.NET-Web-API—Azure/commit/scale_db
The second change isn't complicated either; all you need to do is create new host projects (the same way as previously) and add references to the appropriate class libraries.
Changes to C# code can be analyzed here:
https://github.com/FutureProcessing/Microservices—ASP.NET-Web-API—Azure/commit/scale_csharp 
Now you can enjoy the system running its components in two microservices. From a broader perspective it works this way:
[Diagram: MarketFinder components running as two microservices, plus MarketFinder.CommonHost]
You might have noticed that I've left the MarketFinder.CommonHost project, which hosts both applications, in the solution. This project is useful during development, as it is usually faster to start a single web site than multiple ones, especially when we have a lot of them.

SUMMARY

Most of the community advises against starting system development with a microservices architecture, because it comes with huge cost and complexity overhead. Instead, it is recommended to build a monolithic system and extract microservices over time when required. The advised approach sounds great, but to divide the system into services in the future, we need to make sure we do not create a highly coupled ball of mud. The architectural approach presented in this article may help by creating logical boundaries between system components. Having them prevents us, and our less experienced teammates, from introducing hidden coupling.
However, nothing in software development is a silver bullet. The suggested approach introduces less complexity and cost overhead than a fully blown microservices architecture, but it still introduces some. The biggest problem when cutting a system into independent components is that it can never be done without actually cutting off some dependencies that exist. If we find boundaries where those dependencies are minimal, we will benefit from low coupling in the future; but if we set those boundaries wrongly, we will suffer heavily from development complexity and probably poor system performance.

Application Design: Going Stateless on Azure



The components of a cloud application are distributed and deployed among multiple cloud resources (virtual machines) to benefit from the elastic, demand-driven environment. One of the most important factors in this elastic cloud is the ability to add or remove application components and resources as and when required to fulfill scalability needs.
However, when components are removed, their internal state or information may be lost.
That's when the application needs to move its internal state from an in-memory store to a persistent data store, so that scalability and reliability are assured even when components are removed, as well as in the case of failures. In this article, we will understand ‘being stateless’ and will explore strategies like database-driven state management and cache-driven state management.

Being stateless

Statelessness refers to the fact that no data is preserved in the application memory itself between multiple runs of a strategy (i.e. an action). When the same strategy is executed multiple times, no data from one run is carried over to the next. Statelessness allows our system to execute the first run of the strategy on one resource (say X) in the cloud, the second on another available resource (say Y, or even on X again), and so on.
This doesn't mean that applications should not have any state. It merely means that the actions should be designed to be stateless and should be provided with the necessary context to build up the state.
If our application has a series of such actions (say A1, A2, A3…) to be performed, each action (say A1) receives context information (say C1), executes, and builds up the context (say C2) for the next action (say A2). However, action A2 should not depend on action A1 itself; it should be able to execute independently using the context C2 available to it.
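A minimal sketch of this idea (the types and names are illustrative): each action receives its context, does its work, and returns the context for the next action, so no state lives in the action object itself:

using System.Collections.Generic;

// Context passed between actions; nothing is kept in fields of the actions.
public class Context
{
    public Dictionary<string, string> Data { get; } = new Dictionary<string, string>();
}

public interface IAction
{
    // Receives the incoming context and builds up the context for the next action.
    Context Execute(Context incoming);
}

public class A1 : IAction
{
    public Context Execute(Context c1)
    {
        var c2 = new Context();
        c2.Data["user"] = c1.Data["user"];
        c2.Data["step"] = "A1-done";  // everything A2 needs travels inside c2
        return c2;
    }
}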

How can we make our application stateless?

The conventional approach to having stateless applications is to push the state out of the web/services application tier to somewhere else – either into configuration or into a persistent store. As shown in the diagram below, the user request is routed through the app tier, which can refer to the configuration to decide on the persistent store (like a database) in which to keep the state. Finally, an application utility service (preferably isolated from the app tier) can perform state management.
The app utility service takes on the onus of state management. It requires the execution context from the app tier so that it can trigger either a data-driven state machine or an event-driven state machine. An example state machine for a bug management system would have four states, as shown below.
To achieve this statelessness in an application, there are several strategies for pushing the application state out of the application tier. Let's consider a few of them.

Database-driven State Management

Taking the same bug management system as an example, we can derive the state using simple data structures stored in database tables.
Current State | Event     | Action       | Next State
START         | NewBug    | OpenNewBug   | Bug Opened
Bug Opened    | Assigned  | AssignForFix | Fix Needed
Bug Opened    | Not A Bug | MarkClosed   | Bug Closed
Fix Needed    | Resolved  | MarkResolved | Bug Fixed
Fix Needed    | ReOpened  | AssignForFix | Fix Needed
Bug Fixed     | Tested    | MarkClosed   | Bug Closed
Bug Fixed     | ReOpened  | MarkOpen     | Fix Needed
Bug Closed    | END       |              |
The above structure only defines the finite states that a bug resolution can visit. Each action needs to be context-aware (i.e. carry minimal bug information and sometimes the state from which the action was invoked) so that it can independently process the bug and identify the next state (especially when multiple end states are possible).
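A minimal sketch (illustrative, not from the article) of how such rows, once loaded from the database, could drive a data-driven state machine:

using System;
using System.Collections.Generic;

public class BugStateMachine
{
    // (current state, event) -> (action, next state); in practice these
    // tuples would be loaded from the database table shown above.
    private readonly Dictionary<(string State, string Event), (string Action, string Next)> _transitions =
        new Dictionary<(string, string), (string, string)>
        {
            [("START", "NewBug")] = ("OpenNewBug", "Bug Opened"),
            [("Bug Opened", "Assigned")] = ("AssignForFix", "Fix Needed"),
            [("Fix Needed", "Resolved")] = ("MarkResolved", "Bug Fixed"),
        };

    public string Fire(string current, string evt)
    {
        if (!_transitions.TryGetValue((current, evt), out var t))
            throw new InvalidOperationException("Illegal transition");
        Console.WriteLine("Executing action: " + t.Action);
        return t.Next;  // the caller persists this as the bug's new state
    }
}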
When we look at database-driven state management on Azure, we can leverage one of these out-of-the-box solutions:
  • Azure SQL Database: The best choice when we want to work with relational, structured data using relations, indexes, constraints, etc. It is a complete suite of the MS-SQL database hosted on Azure.
  • Azure Storage Tables: Works great when we want to work with structured data without relationships, possibly at larger volumes. Better performance at lower cost is often observed with Storage Tables, especially when used for data without relationships. Further reading on this topic – SQL Azure and Microsoft Azure Table Storage by Joseph Fultz
  • DocumentDB: DocumentDB, a NoSQL database, pitches itself as a solution to store unstructured data (schema-free) and can have rich query capabilities at blazing speeds. Unlike other document based NoSQL databases, it allows creation of stored procedures and querying with SQL statements.
Depending on our tech stack, size of the state and the expected number of state retrievals, we can choose one of the above solutions.
While moving state management to a database works for most scenarios, there are times when these reads and writes to the database may slow down the performance of our application. Considering that state is transient data, and most of it is not required to persist across two sessions of the user, there is a need for a cache system that serves state objects at low latency.

Cache-driven state management

Persisting state data in a cache store is also an excellent option available to developers. Web developers have been storing state data (like user preferences, shopping carts, etc.) in cache stores ever since ASP.NET was introduced. By default, ASP.NET stores state in the memory of the hosting application pool. This in-memory state storage is problematic for the following reasons:
  • The frequency at which the ASP.NET worker process recycles is beyond the application's control, and a recycle can wipe out the in-memory cache
  • With a load balancer in the cloud, there isn't any guarantee that the host that processed the first request will also receive the second one, so the in-memory information on multiple servers may not be in sync
This typical in-memory state management is referred to as ‘in-role’ cache when the application is hosted on the Azure platform.
The alternative to in-memory state management is out-of-process management, where state is kept either by a separate service or in SQL Server – something we discussed in the last section. This mechanism assures resiliency at the cost of performance: for every request to be processed, there is an additional network call to retrieve state information before the request is processed, and another network call to store the new state.
The need of the hour is to have a high-performance, in-memory or distributed caching service that can leverage Azure infrastructure to act as a low-latency state store – like, Azure Redis Cache.
Based on the tenancy of the application, we can have a single node or multiple nodes (primary/secondary) of Redis Cache to store data types such as lists, hashed sets, sorted sets and bitmaps.
Azure Redis Cache supports master-slave replication, with very fast non-blocking first synchronization and auto-reconnection on net split. So, when we choose multiple nodes for Redis cache management, we ensure that our application state is not managed on a single server. Our application state gets replicated to multiple nodes (i.e. slaves) in real time, and the service promises to bring a slave node up automatically when the master node goes offline.

Fault tolerance with State Management Strategies

With both database-driven and cache-driven state management, we also need to handle temporary service interruptions – possibly due to network connections, layers of load balancers in the cloud, or some backbone services that these solutions use. To give a seamless experience to our end users, our application design should cater for these transient failures.
Handling database transient errors
Using the Transient Fault Handling Application Block with plain vanilla ADO.NET, we can define a policy that retries the execution of a database command with a wait period between tries, in order to provide a reliable connection to the database. Or, if our application uses Entity Framework 6, we can include SqlAzureExecutionStrategy, an execution strategy that configures the policy to retry 3 times with an exponential wait between tries.
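A minimal sketch of wiring up the EF6 strategy (the retry count and delay cap here are illustrative, not prescribed by the article):

using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF6 discovers this DbConfiguration automatically when it lives in the
// same assembly as the DbContext.
public class AppDbConfiguration : DbConfiguration
{
    public AppDbConfiguration()
    {
        // Retry up to 3 times with an exponential back-off capped at 30 seconds.
        SetExecutionStrategy(
            "System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(3, TimeSpan.FromSeconds(30)));
    }
}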
Every retry consumes computation power and slows down application performance. So we should define a policy – a circuit breaker – that prevents the service from being throttled by repeatedly processing failing requests. There is no one-size-fits-all solution to breaking the retries.
There are two ways to implement a circuit breaker for state management:
  • Fallback or fail silent – If there is a fallback mechanism that completes the requested functionality without state management, the application should attempt to execute it. For example, when the database is not available, the application can fall back on a cached object. If no fallback is available, our application can fail silently (i.e. return an empty state for the request).
  • Fail fast – Return an error to the user to avoid flooding the retry service, and provide a friendly response suggesting they try later.
Handling cache transient errors
Azure Redis Cache internally uses a ConnectionMultiplexer that automatically reconnects to the cache should there be a disconnection or an Internet glitch. However, StackExchange.Redis does not retry the get and set commands. To overcome this limitation, we can use a library such as Polly, which provides policies like Retry, Retry Forever, Wait and Retry, and Circuit Breaker in a fluent manner.
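A minimal sketch of such a policy around a cache read (the exception type, retry count and delays are illustrative):

using System;
using Polly;
using StackExchange.Redis;

public static class ResilientCache
{
    public static string Get(IDatabase db, string key)
    {
        // Wait and retry three times with a growing delay, then give up.
        var retry = Policy
            .Handle<RedisConnectionException>()
            .WaitAndRetry(3, attempt => TimeSpan.FromMilliseconds(200 * attempt));

        return retry.Execute(() => (string)db.StringGet(key));
    }
}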
The take-away!
The key take-away is to design applications knowing that infrastructure in the cloud is elastic, and that our applications should be designed to leverage its benefits without compromising stability and user experience. It is, hence, of utmost importance to think about application information storage, its access mechanisms, exception handling, and dynamic demand.