The Kendo UI framework is a popular HTML5, jQuery-based tool for building modern web apps, and Burke and Cody from the Kendo UI team recently sat down with us during the Codementor Office Hours to answer questions about Kendo UI.
The text below is a summary by the Codementor team and may differ from the original video. If you see any issues, please let us know!
What are the best practices for creating a really large SPA?
First of all, Kendo UI does have its own SPA framework, and we have an entire article series and demos to help you get started on SPA. The source code for all the demos is in one single file, and it should help you understand how single-page applications work.
The only drawback is that there are no best practices on how you should structure your app. However, if you use Visual Studio, you can look at the SPA template. It will use require.js, put the views in the appropriate folders, and give you an opinionated structure for your application.
If you’re not using Visual Studio, you can look at the Yeoman Kendo UI generator, which will do the same thing for you from the CLI.
How you should build your applications depends on how your brain works and the type of application you’re trying to build. If you have a hard time figuring out how the parts fit together, piggyback on the wealth of Backbone material on how to put an application together, since you can follow the same strategy with Kendo UI.
Should you use HTTP network caching, or should the cache be stored in HTML5 local storage?
Step one: if you’re rendering a page, posting to a server, and coming back, then include in the rendered page any data you can. It may not always be possible, but if you’re using MVC, PHP, or some other server-side framework that templates out your pages, anything you can embed in the page is one less request you’ll have to make on the second trip.
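As a framework-agnostic sketch of that idea (the payload and function names here are hypothetical, not part of Kendo UI), the server can template a JSON blob into the page so the client reads it instead of making a second request:

```javascript
// The server templates something like this into the page, e.g. via MVC or PHP:
//   <script>window.__initialData = { "items": [...] };</script>
// Here we simulate that bootstrapped payload as a JSON string.
const bootstrapped = '{"items":[{"id":1,"name":"Widget"},{"id":2,"name":"Gadget"}]}';

// Hypothetical loader: prefer the embedded data, fall back to a server call.
function loadInitialData(embeddedJson, fetchFallback) {
  if (embeddedJson) {
    return JSON.parse(embeddedJson); // no extra request needed
  }
  return fetchFallback(); // second trip to the server
}

const data = loadInitialData(bootstrapped, () => {
  throw new Error("should not hit the server");
});
console.log(data.items.length); // 2
```

In a real page the blob is written by the server-side template; the fallback only fires when no data was embedded.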
Step two: Kendo UI has Angular integrations, and you probably don’t want to call the data source again every time the page changes. If you have separate pages in Angular and each one has a Kendo UI grid on it, there should be an attribute called k-refresh. You can set it to false, and it will tell Kendo UI not to fetch new data when you move around.
Furthermore, there isn’t any official local storage support inside the Kendo UI DataSource, so you can always write your own simple key-value store on top of local storage. Another place to look is Kendo UI Labs, a community project with many contributors, which includes a local storage adapter for the Kendo UI DataSource.
There you can see how Brandon, the creator of that local storage adapter, simply extended the DataSource and added some local storage capabilities.
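As a rough, framework-free sketch of the same caching idea (the names here are illustrative; the real adapter extends kendo.data.DataSource, and the store object below merely stands in for the browser's localStorage):

```javascript
// A minimal stand-in for window.localStorage so the sketch is self-contained.
const store = {
  data: {},
  getItem(key) { return key in this.data ? this.data[key] : null; },
  setItem(key, value) { this.data[key] = String(value); }
};

// Hypothetical read-through cache: check the key-value store first,
// and only call the transport (the "server") on a cache miss.
function cachedRead(key, transport) {
  const hit = store.getItem(key);
  if (hit !== null) return JSON.parse(hit); // served from local storage
  const fresh = transport();
  store.setItem(key, JSON.stringify(fresh)); // cache for next time
  return fresh;
}

let serverCalls = 0;
const transport = () => { serverCalls++; return [{ id: 1 }, { id: 2 }]; };

cachedRead("products", transport); // first read hits the server
cachedRead("products", transport); // second read comes from the cache
console.log(serverCalls); // 1
```

The real adapter wires this same check into the DataSource's transport layer instead of a standalone function.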
In general, network (HTTP) caching is good enough and carries no performance penalty. However, if you want an offline-capable application, caching locally makes sense.
Software testing is a way to validate and verify that a product or application works as intended. It can be incorporated at various points in the development process, depending on the methodology and tools used. Testing usually starts once the requirements have crystallized: at the unit level it starts concurrently with coding, whereas at the integration level it starts when coding is complete. Testing finds the bugs in an application and helps uncover failures before the software crashes in production. The purpose is to satisfy the stakeholders and ensure the quality of the application.
There are two broad types of software testing: manual testing and automated testing.
Manual Testing
Manual testing involves testing the software without any automation script or tool. Testers check the application by taking on the role of an end user and try to find any unexpected behavior or failure. Test management is handled with test plans and test cases.
Automated Testing
Automated testing involves writing automation scripts and executing them against the application with the help of an automation tool. Once a script is ready, its tests can run quickly and efficiently.
Since the cost of automated testing is the effort and time required to create the scripts, not all tests should be converted to automated tests. There should be a valid reason to pay that cost.
Reasons for Automation
1. Regression testing, to confirm that new changes have not adversely affected the application. It re-executes existing test cases, which is efficient when we need to give developers immediate feedback.
2. Test cases that need to be run many times, often with varying datasets, to cover multiple workflow paths.
3. Support for agile methodologies.
4. Customized reports for monitoring.
Getting Started with Automated Testing
Once the need for automated testing has been established, the relevant test scripts must be created. This can be done only by skilled testers who know testing, the tool suite, and the functionality under development. Such people are costly and their time is at a premium, so it is often not possible to budget for automating every test. The major decision points when identifying cases for automation are:
1. System modules whose requirements do not change frequently
2. Tests for which there is ample time to describe them via scripts
3. Application or software modules critical enough to justify the upfront cost of automation
4. Tests where, after functional testing, we want to do performance testing with multiple virtual users using the same script
With the scope of automation decided, the next step is to pick a testing tool. The following checklist can help with the selection.
1. The tool should be easy to work with. It should execute test cases unattended and provide an interface for writing scripts, an efficient IDE, and easy test execution.
2. The tool should support various technologies: testing with different browsers, languages, and types of applications.
3. It should integrate with Application Lifecycle Management (ALM) software, so that it can run automated Build Verification Tests and its reports can be combined with the other reports the ALM software creates.
Automated testing frameworks today
Today we will compare three automated testing tools: Selenium, QTP (QuickTest Professional), and Coded UI Test (CUIT) with Visual Studio 2012.
We will consider the main aspects of automation mentioned above and see how each tool supports each category.
General information about these tools
Selenium, created by Jason Huggins, is an open source testing tool. Simon Stewart later started WebDriver to overcome some of Selenium's limitations, and the two projects have since merged into one testing tool. HP QTP (QuickTest Professional) was originally written by Mercury Interactive and is part of the HP Quality Center (QC) suite. Coded UI Test (CUIT) was introduced by Microsoft with Visual Studio 2010 and integrates with Team Foundation Server.
Ease of Use
- Recording and playback: Each tool can record actions and play them back. Selenium provides the Selenium IDE plug-in for Mozilla Firefox, with which actions can be recorded. QTP provides a record button to record a new test. Recording for CUIT can be done with two different tools: Microsoft Test Manager can record actions that are then converted to a CUIT, and its fast-forward playback can run the test case semi-automatically even before it is converted to a script; within Visual Studio, CUIT provides the Coded UI Test Builder to record actions. In my opinion, all three tools make record and playback very easy.
- IDE and scripting tools: With Selenium IDE there is no special tool or specific technology for writing scripts; we can insert commands in the Table or Source view when required.
QTP provides a Keyword View that displays test steps graphically, and an Expert View that shows the VBScript lines.
For CUIT we can easily use Visual Studio IDE to write scripts.
Selenium IDE comes as a plug-in for Mozilla Firefox. With it we can create a test suite comprising various test cases. Selenium IDE can also convert recorded scripts into different languages, after which they can be run in Selenium RC. Selenium RC has two components: the Selenium Server and the Selenium Client.
When the QTP IDE is opened for the first time, three add-ins are provided: ActiveX, Visual Basic, and Web. The start page offers links to best practices and the new features of the current version. We can either open an existing test case or create a new one.
For CUIT we have a very elaborate IDE, since recording can be done from Visual Studio and all of Visual Studio's features apply. Its support for writing and debugging scripts is excellent. In my opinion, CUIT scores the most points in this area.
- Ease of test case execution: With Selenium IDE we can execute an entire recorded test suite or one test case at a time.
Depending on the add-ins loaded in the QTP IDE, the Record and Run window shows different tabs; the Windows Applications tab is always available. Tests are executed with the Run button, which opens the Run dialog box, where we can specify the location for run-specific results and provide parameters, if any.
CUIT can be executed either from Visual Studio or with Microsoft Test Manager (MTM). With MTM we can configure various settings so that a lot of information is gathered behind the scenes while the test case executes. Test execution is more or less straightforward with all three tools. CUIT's test settings let it capture the extra data needed when filing a bug (commonly called a rich bug).
All three tools can execute test cases without human intervention.
Platform Support
- Language support: Selenium uses Selenese, a high-level, cross-platform, domain-specific language for writing Selenium commands. Its commands fall into three basic categories: actions, accessors, and assertions. Tests can also be written in many programming languages, such as C#, Java, Perl, PHP, Python, or Ruby.
QTP scripts are written in VBScript, a high-level language that supports everything except polymorphism and inheritance.
For CUIT we write scripts in Visual Studio, where we can use all object-oriented programming concepts if required. From a tester's perspective, writing VBScript is very easy; even though CUIT supports object-oriented programming, testers may not prefer it, since writing or customizing the generated script takes considerable skill.
- Support for various application types: Selenium supports only web applications.
QTP supports almost any kind of application.
CUIT supports Windows applications, web applications, WPF applications, SharePoint, Office client applications, and Dynamics CRM web client applications.
Selenium scores fewer points in this regard, as it supports only web applications; QTP supports almost all kinds of applications, even more than CUIT.
- Support for various browsers: Selenium supports all versions of IE, Firefox, Safari, Opera, and a few more browsers.
QTP supports IE and Firefox, but neither QTP nor CUIT provides full cross-browser support.
CUIT supports only IE8, IE9, and IE10 (IE10 is supported only on the desktop). There is no support for IE6, IE7, Chrome, Opera, or Safari.
Selenium is the clear winner in this respect.
- Support for data-driven testing: Selenium IDE supports an XML data source via user extensions.
In QTP, data-driven testing is implemented as an Excel workbook that QTP can access. There are two types of data sheets: global and local. The global sheet is a single sheet that can be accessed from every action in a test, and a local data sheet can also be associated with each action.
Coded UI Test supports any data source supported by the .NET Framework, which can be a .CSV file, an XML file, or any other data source such as a SQL Server table or an Access table.
In my opinion CUIT provides better ways of data driven testing.
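Tool specifics aside, the core of data-driven testing is the same everywhere: one test body executed once per data row. A minimal, tool-agnostic sketch (all names here are illustrative, not any tool's API):

```javascript
// The data sheet: each row is one iteration of the same test,
// covering a different workflow path.
const rows = [
  { a: 2, b: 3, expected: 5 },
  { a: -1, b: 1, expected: 0 },
  { a: 0, b: 0, expected: 0 }
];

// The "system under test" stands in for whatever the script exercises.
function add(a, b) { return a + b; }

// Run the same test body once per data row, collecting pass/fail results.
const results = rows.map(row => add(row.a, row.b) === row.expected);

console.log(results.every(Boolean)); // true
```

QTP's global/local sheets and CUIT's .NET data sources both feed this same loop; only where the rows come from differs.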
- Exception handling: Selenium IDE does not support error handling, particularly for unexpected errors (as it supports only HTML). Selenium RC does provide support for it, since it works with .NET languages, Java, Perl, Python, PHP, and Ruby.
QTP provides VBScript, in which we can use On Error statements. Since CUIT supports high-level languages like C# or VB.NET, we can use try/catch constructs. In my opinion, all three tools have their limitations here.
With CUIT it is suggested to catch the base exception and write code accordingly.
- Validations or assertions: Selenium assertions can be used in three modes: assert, verify, and waitFor. When an assert fails, the test is aborted. When a verify (a soft assertion) fails, the test continues executing and logs the failure; this facility can be used with the TestNG framework. The waitFor commands wait for a condition to become true: they succeed immediately if the condition is already true, but fail and halt the test if the condition does not become true within the current timeout period.
For QTP there are checkpoints to verify the application under test. They come in ten types: Standard, Table, Image, Bitmap, Database, Text, Text Area, Page, Accessibility, and XML. A checkpoint is a verification point that compares the current value with the expected value; if they match it generates a PASS status, otherwise FAIL. With CUIT we can use the Coded UI Test Builder to add assertions on UI controls. We edit the assertion condition as required (equal to, between, contains, etc.), provide the expected value, and generate code for it. Mouse-hover events can be recorded manually if required.
In my opinion, QTP and CUIT are the better tools in this aspect.
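The assert/verify distinction described above is not specific to Selenium; here is a minimal sketch of hard versus soft assertions (the names are illustrative, not any tool's API):

```javascript
const failures = [];

// Hard assertion ("assert"): abort the test immediately on failure.
function hardAssert(condition, message) {
  if (!condition) throw new Error(message);
}

// Soft assertion ("verify"): log the failure and keep executing.
function softAssert(condition, message) {
  if (!condition) failures.push(message);
}

// A test using soft assertions runs to completion and reports at the end.
softAssert(1 + 1 === 2, "math is broken");
softAssert("title" === "Title", "page title mismatch"); // fails, but execution continues
softAssert([1, 2, 3].length === 3, "wrong item count");

console.log(failures.length); // 1
```

A waitFor is then just a soft or hard check retried until a timeout expires.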
- Support for objects: Selenium does not support object properties; Selenium objects can be managed using UI-Element user extensions. QTP comes with a built-in object repository, and its objects have user-friendly names.
Coded UI Test code for UI controls is organized into three main parts: UIMap.Designer.cs, UIMap.cs, and UIMap.uitest. The first two are separate physical files for the same partial class, while the third is an XML equivalent of all the actions recorded with the CUIT Builder. Any required changes can be made in the partial class file. We can also edit the UIMap with the Coded UI Test Editor and inspect an object's properties. A CUIT can be completely hand-coded if required: Coded UI Test includes a rich API library to code against and a resilient record-and-playback tool, and it can be extended to support custom controls.
CUIT is the clear winner in this regard.
Integration with Application Lifecycle Management and going beyond
- ALM integration: Selenium, being open source, can be integrated with other open source Application Lifecycle Management products such as QMetry. These in turn connect to the rest of the software development lifecycle through tools such as Atlassian Jira (project tracking), FogBugz, or Bugzilla (bug tracking).
QTP, being part of Quality Center, supports a requirements traceability matrix and integrates seamlessly with QC. With this integration, test management, and mapping the manual testing process to automation, become a lot easier.
With CUIT and MTM we get all the ALM support Team Foundation Server provides: work item tracking, source or version control, build automation, and various reports. The support is built in; we do not have to do anything extra.
QC is still not a complete lifecycle management tool: it supports only test management, bug management, and requirements management, with no support for effort management, build management, or different process templates.
CUIT is the winner here, as it seamlessly integrates with Team Foundation Server (TFS), which in turn supports work item tracking, source or version control, requirements management, project management, build automation, and various reports for monitoring.
- Monitoring with customized reports: Test results are available with each tool. Coded UI Test supports all the reports Team Foundation Server provides, as well as the option of creating custom reports. A custom report can be created in any of three ways: with a Report project in Business Intelligence Development Studio (BIDS), with Microsoft Excel, or with the Report Builder facility for creating reports on the fly.
- Going beyond: Selenium, being open source, has a lot of plug-ins available. Selenium IDE has plug-ins for customization, for adding new functionality to the API, and for changing existing functionality.
QTP provides plug-ins for ActiveX controls, web applications, and VB objects. Plug-ins for other objects, such as Microsoft .NET, multimedia, and Windows Mobile, are also available, though at an additional cost.
Apart from hand-coding a complete CUIT, there is another option: CUITe (Coded UI Test enhanced), a thin layer developed on top of Microsoft's Coded UI engine that helps reduce code while increasing readability and maintainability. It is very easy to install and is referenced as CUITe.dll in the project, and it provides a simple object repository. Each of the tools keeps adding features as needs arise.
I have evaluated these tools from different angles, and each has its strengths and weaknesses. You may choose a tool depending on your needs and the support the tool provides.
Summarizing the Comparison
The following table provides a bird's-eye view of the categories and each tool's support for them.
Conclusion
To conclude, we took a quick look at what automated testing is and when in a software development cycle it is a good time to start thinking about test automation.
We looked at three popular automation tools, Selenium, CUIT, and QTP, and gauged their strengths and weaknesses. The final selection of a tool is almost always based on budget and team strengths (tool familiarity); still, for web application testing all three have compelling strengths, while for desktop application testing the choice narrows to two, with Selenium dropping out.
Many applications need
to retrieve data from SQL Server tables based on DATETIME and/or SMALLDATETIME
columns. In your particular application, you may need to select records that
were entered on a particular date. On the other hand, you might need to select
a set of records that have a DATETIME column value for a particular month, or
year. In other cases, you might want to find all the records between two
different dates. Possibly, you might need to find the first or last record
entered in a given month, day, or year. This article will discuss selecting
records from a database table based on values in a DATETIME, or SMALLDATETIME
column.
Prior to discussing
selecting records for a particular DATETIME value, let's review what specific
values are stored in a given DATETIME and SMALLDATETIME column. From my first article in this series you should
recall that a DATETIME column contains a date and time value, where time is
accurate to milliseconds and SMALLDATETIME columns hold a date and time value,
but the time portion is only accurate to one minute. Since these date/time
columns store the time portion you will need to consider this when searching
for records where the column holds a specific date. You will need to
provide both the date and time portions in the search criteria, or you
may not return the records you wish. If you are not sure of the exact
time associated with the records you want to retrieve you should search based
on a date and/or time range. Let's go through a couple of examples to show you
what I am talking about.
DATE_SAMPLE Table
In order to show you
different methods of searching SQL Server tables, I will need a sample table.
The table I will be using is a very simple table called DATE_SAMPLE and here is
a list of records in that table.
RECORD    SAMPLE_DATE
------    -----------------------
1         2001-11-08 00:00:00.000
2         2002-04-08 16:00:00.000
3         2003-04-12 16:59:00.000
4         2003-04-09 00:00:00.000
5         2003-04-09 08:00:00.000
6         2003-04-09 14:58:00.000
7         2003-04-09 23:59:00.997
8         2003-04-10 00:00:00.000
9         2003-04-12 00:00:00.000
10        2003-05-10 00:00:00.000
Common Mistakes When Searching for Dates:
When searching for
dates there are a number of common mistakes that new SQL Server programmers
sometimes make. In this section, I will show you two common date/time pitfalls.
The intent of this
first example is to select all the records in the DATE_SAMPLE table that have a
SAMPLE_DATE equal to '2003-04-09'. Here is the code:
SELECT * FROM DATE_SAMPLE WHERE SAMPLE_DATE = '2003-04-09'
When this code is run
only record 4 is returned. Why are records 5, 6 & 7 not returned? Can you
tell why? Remember, DATETIME and SMALLDATETIME columns contain not only
the date but also the time. In this particular example SAMPLE_DATE is a
DATETIME column, so all the dates stored contain a time, down to the
milliseconds. When you specify search criteria that contain only a date,
as in the above example, SQL Server
needs to first convert the string expression '2003-04-09' to a date and time
value, prior to matching the string with the values in the SAMPLE_DATE column.
This conversion creates a value of '2003-04-09 00:00:00.000', which matches
with only record 4.
Another common
mistake is to use the BETWEEN verb like so:
SELECT * FROM DATE_SAMPLE WHERE SAMPLE_DATE between '2003-04-09'
AND '2003-04-10'
When using the BETWEEN verb, all records that are between or equal to
the two dates specified are returned. Suppose, as in my example above, I
only wanted to return records that have a SAMPLE_DATE in '2003-04-09'.
This example returns all the records that have a SAMPLE_DATE in
'2003-04-09' (records 4 - 7), but also returns record 8, which has a
SAMPLE_DATE of '2003-04-10'. Since the BETWEEN clause is inclusive of
the two dates specified, record 8 is also returned.
Now if you really
desire to select all the records in the DATE_SAMPLE table that have a
SAMPLE_DATE sometime in '2003-04-09', you have a couple of options. Let
me go through each option and then explain why one might be better than
another.
Using the Convert Function:
This first example
selects all records from the DATE_SAMPLE where the date portion of the
SAMPLE_DATE is equal to '2003-04-09'.
SELECT * FROM DATE_SAMPLE
WHERE CONVERT(CHAR(10),SAMPLE_DATE,120) = '2003-04-09'
The reason this
example works, while the first example above does not, is that this example
removes the time portion of the SAMPLE_DATE column prior to the comparison with
string '2003-04-09' being performed. The CONVERT function removes the time
portion by truncating the value of the SAMPLE_DATE field to only the first 10
characters.
Selecting Based on a Date Range:
The next example
selects records based on a date range. This example is also going to retrieve
only the records that have a SAMPLE_DATE in '2003-04-09'.
SELECT * FROM DATE_SAMPLE
WHERE SAMPLE_DATE >= '2003-04-09'
AND SAMPLE_DATE <'2003-04-10'
Note that the first
condition uses a greater than or equal (>=) expression instead of just
greater than (>). If only the greater than sign was used the SELECT statement
would not return record 4. This record would not be returned because when SQL
Server converts the string '2003-04-09' to a date/time value it would be equal
to the SAMPLE_DATE on record 4.
Using the DATEPART Function:
Another way to return
the records that have a SAMPLE_DATE for a particular date is to use the
DATEPART function. With the DATEPART function you can build a WHERE statement
that breaks apart each piece (year, month, day) of the SAMPLE_DATE and verifies
that each piece is equal to the year, month and day you are looking for. Below,
is a DATEPART example that once again returns all the records that have a
SAMPLE_DATE in '2003-04-09'.
SELECT * FROM DATE_SAMPLE
WHERE
DATEPART(YEAR, SAMPLE_DATE) = '2003' AND
DATEPART(MONTH,SAMPLE_DATE) = '04' AND
DATEPART(DAY, SAMPLE_DATE) = '09'
Using the FLOOR Function:
As I have said before
there are many ways to accomplish the same thing. Here is a method that uses
the FLOOR and CAST functions to truncate the time portion from a date. The inner
CAST function converts a DATETIME variable into a decimal value, then the FLOOR
function rounds it down to the nearest integer value, and then the outer CAST
function does the final conversion of the integer value back to a DATETIME
value.
SELECT * FROM DATE_SAMPLE WHERE
CAST(FLOOR(CAST(SAMPLE_DATE AS FLOAT)) AS DATETIME) =
'2003-04-09'
Using the LIKE clause:
The LIKE clause can
also be used to search for particular dates, as well. You need to remember that
the LIKE clause is used to search character strings. Because of this the value
which you are searching for will need to be represented in the format of an
alphabetic date. The correct format to use is: MON DD YYYY HH:MM:SS.MMMAM,
where MON is the month abbreviation, DD is the day, YYYY is the year, HH is
hours, MM is minutes, SS is seconds, MMM is milliseconds, and the
trailing AM or PM designates morning or afternoon.
The LIKE clause is
somewhat easy to use because you can use the wildcard to select all the records
in a particular month, AM or PM records, a particular day, and so on. Again
using our DATE_SAMPLE table above, let me show you how to return records using
the LIKE clause.
Say you want to
return all the records with a SAMPLE_DATE in '2003-04-09'. In that case, your
SQL Statement would look like so:
SELECT * FROM DATE_SAMPLE WHERE SAMPLE_DATE LIKE 'Apr 9 2003%'
Note the month is
specified as "Apr", instead of using the numeric "04" value
for April. This SELECT statement, similar to the ones I showed earlier, returns
records 4 through 7.
Now, say you want to
return all the records for April 2003. In this case, you would issue the
following statement:
SELECT * FROM DATE_SAMPLE WHERE SAMPLE_DATE LIKE 'Apr%2003%'
This statement would
return records 3 through 9 from the DATE_SAMPLE table.
If you would like to
return any record that has a SAMPLE_DATE in April regardless of the year, then
the LIKE statement makes this easy. The following statement uses the LIKE
clause to retrieve not only the 2003 records, but also the one record in table
DATE_SAMPLE for 2002.
SELECT * FROM DATE_SAMPLE WHERE SAMPLE_DATE LIKE 'Apr%'
The above statement
would return records 2 through 9.
If you wanted to
return all the records that have a PM designation (records 2, 3, 6, and 7), you
could do this easily using the following LIKE clause:
SELECT * FROM DATE_SAMPLE WHERE SAMPLE_DATE LIKE '%PM'
As you can see, the
LIKE statement gives you another way to search the database for records
matching a particular date criterion, with support for wildcard characters.
Finding First Record of the Month
Sometimes you may want a specific record but do not know the exact date
to search for to find it. For example, you may want the RECORD number of
the first record inserted in a given month. Since you don't know what
the SAMPLE_DATE date and time might be for that first record, you will
need to search for all records in the desired month and use the TOP
clause to return the first one.
Here is an example that uses the LIKE Clause to return the first record that
has a SAMPLE_DATE in April 2003.
SELECT TOP 1 RECORD FROM DATE_SAMPLE WHERE SAMPLE_DATE
LIKE 'APR%2003%' ORDER BY SAMPLE_DATE
Note that I have used
the ORDER BY clause. The reason is that records in SQL Server are not
necessarily stored in order.
Performance Considerations
If you are searching
large tables with lots of records, you will most likely index some of the date
columns that are commonly used to constrain queries. When a date column is used
in a where clause, the query optimizer will not use an index if the date column
is wrapped in a function. In addition, using the LIKE clause to search for
particular records will keep the query optimizer from using an index, thus
increasing how long it takes SQL Server to complete your query. Let me
demonstrate.
I have now placed a
non-clustered index on column SAMPLE_DATE in the DATE_SAMPLE table called
'SD_IX'. Below there are two different SELECT statements I will be using for my
example.
SELECT * FROM DATE_SAMPLE WHERE
SAMPLE_DATE >= '2003-04-09' AND SAMPLE_DATE <'2003-04-10'
SELECT * FROM DATE_SAMPLE WHERE
CONVERT(CHAR(10),SAMPLE_DATE,121) = '2003-04-09'
The first SELECT
statement selects records without using a function, while the second select
statement uses a CONVERT function. Both SELECT statements return the same
results, all the records for '2003-04-09'. By issuing the "SET
SHOWPLAN_TEXT ON", we can display the execution plans of each SELECT
statement in TEXT format. Reviewing the execution plans shows that the
first SELECT statement uses an index seek on index 'SD_IX', while the
second one uses only a table scan.
Therefore, if
performance is a consideration then it is best to write your code to make sure
it can take advantage of available indexes. Of course, if the table you are
searching contains only a small number of records, then the performance
gains may not outweigh the simplicity of writing code that uses a
function of some kind.
Conclusion:
There are always many
different methods that can be used to search for records that contain dates and
times, and different performance considerations with each. I hope that this
article has given you some insight on the different ways to search SQL Server tables,
using a date in the selection criteria.
My next article,
regarding working with SQL Server date and time variables, will be the last in
this series. It will discuss the use of the DATEDIFF, DATEADD, GETDATE and
GETUTCDATE functions, and how these might be used in your applications.
When you open Visual Studio 2013 intending to build a new ASP.NET MVC 5 project, you find only one option: an ASP.NET Web Application. This is great, as it represents a moment of clarity in a whirlpool of similar-looking and confusing options. So you click and are presented with various options for the type of project to create. You want ASP.NET MVC, right? So you pick the MVC option. What you obtain is a no-op demo application, probably meant to give you an idea of what it means to code for ASP.NET MVC. Even though the result does nothing, the resulting project is fairly overloaded.
You’ll also find several NuGet packages and assemblies referenced that are not required by the sample application, yet are already there to save time for when you do need them. This is not a bad idea in theory, as nearly any ASP.NET website ends up using jQuery, Bootstrap, Modernizr, Web Optimization, and others. And if you don’t like it, you still have the option of starting with an empty project and adding MVC scaffolding. That is better, as it delivers a more nimble project, even though it still includes references that are useless at first. The truth is that any expert developer has his or her own favorite initial layout for the startup project, including must-have packages and scripts.
Although I may be tempted, I don’t want to push my own ideal project layout on you. My purpose, instead, is to apply Occam’s razor to the ASP.NET MVC project templates you get in Visual Studio 2013. I’ll start with the organization of project folders and proceed through startup code, bundling, HTML layout, controllers, layers, HTTP endpoints, and multi-device views. Overall, here are ten good practices for sane ASP.NET MVC 5 development.
#1: Project Folders and Namespaces
Let’s say you used the Visual Studio 2013 project template to create a new project. It works, but it’s rather bloated. Figure 1 shows the list of unnecessary references detected by ReSharper.
It’s even more interesting to look at the remaining references. Figure 2 shows what you really need to have referenced in order to run a nearly dummy ASP.NET MVC application.
Here’s the minimal collection of Nuget packages you need in ASP.NET MVC.
The project contains the folders listed in Table 1.
When you start adding Nuget packages, some other conventions start appearing such as the Scripts folder for Modernizr and for jQuery and its plugins. You may also find a Content folder for Bootstrap style sheets and a separate Fonts folder for Bootstrap’s glyph icons.
I find such a project structure rather confusing and usually manage to clean it up a little bit. For example, I like to place all content (images, style sheets, scripts, fonts) under the same folder. I also don’t much like the name Models. (I don’t like the name App_Start either but I’ll return to that in a moment.) I sometimes rename Models to ViewModels and give it a structure similar to Views: one subfolder per controller. In really complex sites, I also do something even more sophisticated. The Models folder remains as is, except that two subfolders are added: Input and View, as shown in Figure 3.
Generally speaking, there are quite a few flavors of models. One is the collection of classes through which controllers receive data. Populated from the model binding layer, these classes are used as input parameters of controller methods. I collectively call them the input model and define them in the Input subfolder. Similarly, classes used to carry data into the Razor views collectively form the view model and are placed under the Models/View folder. Both input and view models are then split on a per-controller basis.
One more thing to note is the matching between project folders and namespaces. In ASP.NET it’s a choice rather than an enforced rule. So you can freely decide to ignore it and still be happy, as I did for years. At some point—but it was several years ago—I realized that maintaining the same structure between namespaces and physical folders was making a lot of things easier. And when I figured out that code assistant tools were making renaming and moving classes as easy as click-and-confirm, well, I turned it into enforcement for any of my successive projects.
#2 Initial Configuration
A lot of Web applications need some initialization code that runs upon startup. This code is usually invoked explicitly from the Application_Start event handler. An interesting convention introduced with ASP.NET MVC 4 is the use of xxxConfig classes. Here’s an example that configures MVC routes:
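A sketch of such a class, modeled on the RouteConfig class that the standard Visual Studio template generates (names follow the default template), might look like this:

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    // Invoked explicitly from Application_Start in global.asax
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // The canonical {controller}/{action}/{id} route
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index",
                            id = UrlParameter.Optional }
        );
    }
}
```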
For consistency, you can use the same pattern to add your own classes that take care of application-specific initialization tasks. More often than not, initialization tasks populate some ASP.NET MVC internal dictionaries, such as the RouteTable.Routes dictionary of the last snippet (just after the heading for #2). For testability purposes, I highly recommend that xxxConfig methods are publicly callable methods that get system collections injected. As an example, here’s how you can arrange unit tests on MVC routes.
[TestMethod]
public void Test_If_Given_Route_Works()
{
    // Arrange
    var routes = new RouteCollection();
    MvcApplication.RegisterRoutes(routes);

    // Act & Assert whether the right route was found
    var expectedRoute = "{controller}/{action}/{id}";
    var routeData = GetRouteDataFor("~/product/id/123", routes);
    Assert.AreEqual(((Route) routeData.Route).Url, expectedRoute);
}
Note that the code snippet doesn’t include the full details of the custom GetRouteDataFor method. The method uses a mocking framework to mock HttpContextBase and then invokes the GetRouteData method on RouteCollection, passing the mocked context.
var routeData = routes.GetRouteData(httpContextMock);
Many developers just don’t like the underscore convention in the name of some ASP.NET folders, particularly the App_Start folder. Is it safe to rename this folder to something like Config? The answer is: it’s generally safe but it actually depends on what you do in the project.
The possible sore point is the use of the WebActivator Nuget package in the project, either direct or through packages that have dependencies on it. WebActivator is a package specifically created to let other Nuget packages easily add startup and shutdown code to a Web application without making any direct changes to global.asax. WebActivator was created only for the purposes of making Nuget packages seamlessly extend existing Web applications. As WebActivator relies on an App_Start folder, renaming it may cause you some headaches if you extensively add/refresh Nuget packages that depend on WebActivator. Except for this, there are no problems in renaming App_Start to whatever you like most.
#3 Bundling and Minifying CSS Files
Too many requests from a single HTML page may cause significant delays and affect the overall time-to-last-byte metrics for a site. Bundling is therefore the process of grouping distinct resources such as CSS files into a single downloadable resource. In this way, multiple and logically distinct CSS files can be downloaded through a single HTTP request.
Minification, on the other hand, is the process that removes all unnecessary characters from a text-based resource without altering the expected functionality. Minification involves shortening identifiers, renaming functions, removing comments and white-space characters. In general, minification refers to removing everything that’s been added mostly for readability purposes, including long descriptive member names.
Although bundling and minification can be applied together, they remain independent processes. On a production site, there’s usually no reason not to bundle minified CSS and script files. The only exception is for large and very common resources that might be served through a Content Delivery Network (CDN). The jQuery library is a great example.
Bundling requires the Microsoft ASP.NET Web Optimization Framework available as a Nuget package. Downloading the Optimization Framework also adds a few more references to the project. In particular, they are WebGrease and Microsoft Infrastructure. These, in turn, bring their own dependencies for the final graph, shown in Figure 4.
Bundles are created programmatically during the application startup in global.asax. Also in this case, you can use the xxxConfig pattern and add a BundleConfig class to the App_Start folder. The BundleConfig class contains at least one method with code very close to the following snippet.
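As a sketch (the bundle name and file paths here are illustrative), such a method might look like:

```csharp
using System.Web.Optimization;

public class BundleConfig
{
    // Invoked from Application_Start in global.asax
    public static void RegisterBundles(BundleCollection bundles)
    {
        // Bundle and minify a few CSS files under a single virtual path
        bundles.Add(new StyleBundle("~/Bundles/Css")
            .Include("~/Content/Styles/site.css",
                     "~/Content/Styles/nav.css"));
    }
}
```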
The code creates a new bundle object for CSS content and populates it with distinct CSS files defined within the project. Note that the Include method refers to physical paths within the project where the source code to bundle is located. The argument passed to the StyleBundle class constructor instead is the public name of the bundle and the URL through which it will be retrieved from pages. There are quite a few ways to indicate the CSS files to bundle. In addition to listing them explicitly, you can use a wildcard expression:
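For example (paths are again illustrative), a wildcard picks up every CSS file found in a given folder:

```csharp
// Grab all CSS files in the folder through a wildcard expression
bundles.Add(new StyleBundle("~/Bundles/Css")
    .Include("~/Content/Styles/*.css"));
```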
Once CSS bundles are defined, invoking them is as easy as using the Styles object:
@Styles.Render("~/Bundles/Css")
As you can figure from the last two snippets, ASP.NET optimization extensions come with two flavors of bundle classes: the Bundle class and the StyleBundle class. The former only does bundling; the latter does both bundling and minification. Minification occurs through the services of an additional class. The default CSS minifier class is CssMinify and it is based on some logic packaged in WebGrease. Switching to a different minifier is easy too. All you do is use a different constructor on the StyleBundle class. You use the constructor with two arguments, the second of which is your own implementation of IBundleTransform.
#4 Bundling and Minifying Script Files
Bundling and minification apply to script files in much the same way as bundling and minifying CSS files. The only minor difference is that for frequently used script files (such as jQuery) you might want to use a CDN for even better performance. In operational terms, bundling and minifying script files requires the same process and logic as CSS files. You use the Bundle class if you’re only concerned about packing multiple files together so that they are captured in a single download and cached on the client. Otherwise, if you also want minifying, you use the ScriptBundle class.
Like StyleBundle, ScriptBundle also features a constructor that accepts an IBundleTransform object as its second argument. This object is expected to bring in some custom logic for minifying script files. The default minifier comes from WebGrease and corresponds to the JsMinify class.
It’s very common today to arrange very complex and graphically rich Web templates that are responsive to changes in the browser’s window size and that update content on the client side through direct access to the local DOM. All this can happen if you have a lot of script files. It’s not, for the most part, JavaScript code that you write yourself. It’s general-purpose JavaScript that forms a framework or a library. In a nutshell, you often end up composing your client-side logic by sewing together multiple pieces, each of which represents a distinct download.
Considering the general recommendation of using as few script endpoints as possible—and bundling does help a lot in that regard—the optimal position of the <script> tags in the body of the HTML page is an open debate. For quite some time, the common practice was to put <script> elements at the end of the document body. This practice was promoted by Yahoo and aimed at avoiding roadblocks during the rendering of the page. By design, in fact, every time the browser encounters a <script> tag, it stops until the script has been downloaded (or recovered from the local cache) and processed.
It’s not mandatory that all script files belong at the bottom. It’s advisable that you distinguish the JavaScript that’s needed for the page to render from the JavaScript that serves other purposes. The second flavor of JavaScript can safely load at the bottom. Well, mostly at the bottom. Consider that as the page renders the user interface in the browser, users may start interacting with it. In doing so, users may trigger events that need some of the other JavaScript placed at the bottom of the page and possibly not yet downloaded and evaluated. If this is your situation, consider keeping input elements that the user can interact with disabled until it’s safe to use them. The ready event of jQuery is an excellent tool to synchronize user interfaces with downloaded scripts. Finally, consider some techniques to load scripts in parallel so that the overall download time becomes the longest of all instead of the sum of all downloads. The simplest way you can do this is through a programmatically created <script> element. You do this using code, as shown below.
var h = document.getElementsByTagName("HEAD")[0];
var script = document.createElement("script");
script.type = "text/javascript";
script.onreadystatechange = function() { ... };
script.onload = function() { ... };
script.onerror = function() { ... };
script.src = "...";
h.appendChild(script);
Script elements are appended to the HEAD element so that parallel download begins as soon as possible. Note that this is the approach that most social Web sites and Google Analytics use internally. The net effect is that all dynamically created elements are processed on different JavaScript threads. This approach is also employed by some popular JavaScript loader frameworks these days.
#5 The Structure of the _Layout File
In ASP.NET MVC, a layout file is what a master page was in classic Web Forms: the blueprint for multiple pages that are ultimately built out of the provided template. What should you have in a master view? And in which order?
With the exceptions and variations mentioned a moment ago for parallelizing the download of multiple scripts, there are two general rules that hold true for the vast majority of websites. The first rule says: Place all of your CSS in the HEAD element. The second rule says: Place all of your script elements right before the closing tag of the <body> element.
There are a few other little things you want to be careful about in the construction of the layout file(s) for your ASP.NET MVC application. First off, you might want to declare explicitly that the document contains HTML5 markup. You achieve this by having the following markup at the very beginning of the layout and subsequently at the beginning of each derived page.
<!DOCTYPE html>
The DOCTYPE instructs older browsers that don’t support specific parts of HTML5 to behave well and correctly interpret the common parts of HTML while ignoring the newest parts. Also, you might want to declare the character set in the HEAD block.
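In HTML5, the character set declaration reduces to a single meta element:

```html
<meta charset="utf-8" />
```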
Another rather important meta-tag you’ll want to have is the viewport meta-tag whose usage dates back to the early days of smartphones. Most mobile browsers can be assumed to have a rendering area that’s much larger than the physical width of the device. This virtual rendering area is just called the "viewport." The real size of the internal viewport is browser-specific. However, for most smart phones, it’s around 900 pixels. Having such a large viewport allows browsers to host nearly any Web page, leaving users free to pan and zoom to view content, as illustrated in Figure 5.
The viewport meta-tag is a way for you to instruct the browser about the expected size of the viewport.
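A typical declaration, matching the behavior described next, looks like this:

```html
<meta name="viewport"
      content="width=device-width, initial-scale=1.0, user-scalable=no" />
```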
In this example, you tell the browser to define a viewport that is the same width as the actual device. Furthermore, you specify that the page isn’t initially zoomed and, worse, that users can’t zoom in. Setting the width property to the device’s width is fairly common, but you can also indicate an explicit number of pixels.
In ASP.NET MVC, pay a lot of attention to keeping the layout file as thin as possible. This means that you should avoid referencing from the layout file CSS and script files that aren’t strictly needed by every page based on the layout. As developers, we certainly find it easier and quicker to reference resources used by most pages right from the layout file. But that only produces extra traffic and extra latency. Taken individually, these extra delays aren’t significant, except that they sum up and may add one or two extra seconds for the page to show and be usable.
In ASP.NET MVC, a layout page consists of sections. A section is an area that derived pages can override. You might want to use this feature to let each page specify CSS and script (and of course markup) that needs be specific. Each layout must contain at least the section for the body.
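A minimal layout sketch, reduced to just the body section, could be:

```html
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>@ViewBag.Title</title>
</head>
<body>
    @RenderBody()
</body>
</html>
```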
The markup above indicates that the entire body of the page replaces @RenderBody. You can define custom sections in a layout file using the following line:
@RenderSection("CustomScripts")
The name of the section is unique but arbitrary and you can have as many sections as you need with no significant impact on the rendering performance. You just place a @RenderSection call where you want the derived page to inject ad hoc content. The example above indicates a section where you expect the page to insert custom script blocks. However, there’s nothing that enforces a specific type of content in a section. A section may contain any valid HTML markup. If you want to force users to add, say, script blocks, you can proceed as follows:
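One way to do that (a sketch) is to place the section call inside the surrounding markup you expect, directly in the layout file:

```html
<script type="text/javascript">
    @RenderSection("CustomScripts")
</script>
```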
In this case, overridden sections are expected to contain data that fits in the surrounding markup; otherwise, a parsing error will be raised. In a derived page, you override a section like this:
@section CustomScripts {
    alert("Hello");
}
#6 (Don’t) Use Twitter Bootstrap
Twitter Bootstrap is quickly becoming the de-facto standard in modern Web development, especially now that Visual Studio 2013 incorporates it in the default template for ASP.NET MVC applications. Bootstrap essentially consists of a large collection of CSS classes that are applicable directly to HTML elements and in some cases to special chunks of HTML markup. Bootstrap also brings down a few KBs of script code that extends CSS classes to make changes to the current DOM.
As a matter of fact, with Bootstrap, you can easily arrange the user interface of Web pages to be responsive and provide advanced navigation and graphical features.
The use of Bootstrap is becoming common and, as popularity grows, also controversial. There are reasons to go for it and reasons for staying away from it. My gut feeling is that Bootstrap is just perfect for quick projects where aesthetics are important but not fundamental, and where you need to provide pages that can be decently viewed from different devices with the lowest possible costs. Key arguments in favor of Twitter Bootstrap are its native support for responsive Web design (RWD), deep customizability, and the not-secondary fact that it’s extremely fast to learn and use.
Bootstrap was created as an internal project at Twitter and then open-sourced and shared with the community. When things go this way, there are obvious pros and cons. For Web developers, it’s mostly about good things. Bootstrap offers a taxonomy of elements that you want to have in Web pages today: fluid blocks, navigation bars, breadcrumbs, tabs, accordions, rich buttons, composed input fields, badges and bubbles, lists, glyphs, and more advanced things, such as responsive images and media, auto-completion, and modal dialogs. It’s all there and definable through contracted pieces of HTML and CSS classes. Put another way, when you choose Bootstrap, you choose a higher level markup language than HTML. It’s much the same as when you use jQuery and call it JavaScript. The jQuery library is made of JavaScript but extensively using it raises the abstraction level of JavaScript.
By the same token, using Bootstrap extensively raises the abstraction level of the resulting HTML and makes it look like you’re writing Bootstrap pages instead of HTML pages. This is just great for developer-centric Web solutions. It’s not good for Web designers and for more sophisticated projects where Web designers are deeply involved.
When you choose Bootstrap, you choose a higher-level markup language than HTML. It’s much the same as when you use jQuery and call it JavaScript.
From a Web designer’s perspective, Twitter Bootstrap is just a Twitter solution and even theming it differently is perceived as work that takes little creativity. From a pure Web design perspective, Bootstrap violates accepted (best) practices. In particular, Bootstrap overrides the HTML semantics and, subsequently, presentation is no longer separate from the content. Not surprisingly, when you change perspective, the same feature may turn from being a major strength to being a major weakness. Because Bootstrap overrides the HTML semantics, it tends to favor an all-or-nothing approach. This may be problematic for a Web design team that joins an ongoing project where Bootstrap is being used. In a nutshell, Bootstrap is an architectural decision, and one that’s hard to change on the go. So, yes, it makes presentation tightly bound to content. Whether this is really an issue for you can’t be determined from outside the project.
Last but not least, the size of Twitter Bootstrap is an issue. Minified, it amounts to about 100K of CSS and 29K of JavaScript, plus fonts. You can cut this down by picking exactly the items you need. The size is not an issue for sites aimed at powerful devices such as a PC, but for sites aimed at mobile devices Bootstrap may be a bit too much. If you’re going to treat desktop devices differently from mobile devices, you might want to look into the mobile-only version of Bootstrap that you find at .
#7 Keep Controllers Thin
ASP.NET MVC is often demonstrated in the context of CRUD applications. CRUD is a very common typology for applications and it’s a very simple typology indeed. For CRUD applications, a fat controller that serves any request directly is sometimes acceptable. When you combine the controller with a repository class for each specific resource you handle, you get good layering and achieve nifty separation of concerns.
It’s essential to note that the Model-View-Controller pattern alone is not a guarantee of clean code and neatly separated layers. The controller simply ensures that any requests are routed to a specific method that’s responsible for creating the conditions for the response to be generated and returned. In itself, an action method on a controller class is the same as a postback event handler in old-fashioned Web Forms applications. It’s more than OK to keep the controller action method tightly coupled to the surrounding HTTP context and access it from within the controller method intrinsic objects such as Session, Server, and Request. A critical design goal is keeping the controller methods as thin as possible. In this way, the controller action methods implement nearly no logic or very simple workflows (hardly more than one IF or two) and there’s no need to test them.
As each controller method is usually invoked in return to a user’s gesture, there’s some action to be performed. Which part of your code is responsible for that? In general, a user action triggers a possibly complex workflow. It’s only in a basic CRUD, like the very basic Music Store tutorial, that workflow subsequent to user actions consists of one database access that the resource repository carries out. You should consider having an intermediate layer between controllers and repositories. (See Figure 6.)
The extra layer is the application layer and it consists of classes that typically map to controllers. For example, if you have HomeController, you might also want to have some HomeService class. Each action in HomeController ends up calling one specific method in HomeService. Listing 1 shows some minimalistic code to illustrate the pattern.
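A minimal sketch of the pattern (class and member names are illustrative, not the article’s actual Listing 1) could look like this:

```csharp
using System.Web.Mvc;

// Controller stays thin: it routes the call and renders the result
public class HomeController : Controller
{
    private readonly HomeService _service = new HomeService();

    public ActionResult Index()
    {
        // Delegate all logic to the worker service
        var model = _service.GetIndexViewModel();
        return View(model);
    }
}

// Application-layer worker service mapped to HomeController
public class HomeService
{
    public IndexViewModel GetIndexViewModel()
    {
        // Orchestrate repositories/domain logic here
        return new IndexViewModel { Title = "Home" };
    }
}

// View model carried into the Razor view
public class IndexViewModel
{
    public string Title { get; set; }
}
```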
The Index method invokes the associated worker service to execute any logic. The service returns a view model object that is passed down to the view engine for the actual rendering of the selected Razor view. Figure 7 shows instead a modified project structure that reflects worker services and the application layer of the ASP.NET MVC application.
#8 Membership and Identity
To authenticate a user, you need some sort of a membership system that supplies methods to manage the account of a user. Building a membership system means writing the software and the related user interface to create a new account and update or delete existing accounts. It also means writing the software for editing any information associated with an account. Over the years, ASP.NET has offered a few different membership systems.
Historically, the first and still largely used membership system is centered on the Membership static class. The class doesn’t directly contain any logic for any of the methods it exposes. The actual logic for creating and editing accounts is supplied by a provider component that manages an internal data store. You select the membership provider in the configuration file. ASP.NET comes with a couple of predefined providers that use SQL Server or Active Directory as the persistence layer. Using predefined providers is fine, except that it binds you to a predefined storage schema and doesn’t allow any reuse of existing membership tables. For this reason, it’s not unusual that you end up creating your own membership provider.
Defining a custom membership provider is not difficult at all. All you do is derive a new class from MembershipProvider and override all abstract methods. At a minimum, you override a few methods such as ValidateUser, GetUser, CreateUser, and ChangePassword. This is where things usually get a bit annoying.
The original interface of the Membership API is way too complicated, with too many methods and too many quirks. People demanded a far simpler membership system. Microsoft first provided the SimpleMembership provider and then, with Visual Studio 2013, what appears to be the definitive solution: ASP.NET Identity.
In the ASP.NET Identity framework, all of the work is coordinated by the authentication manager. It takes the form of the UserManager<T> class, which basically provides a façade for signing users in and out.
public class UserManager<T> where T : IUser
{
    // ...
}
The type T identifies the account class to be managed. The IUser interface contains a very minimal definition of the user, limited to ID and name. The ASP.NET Identity API provides the predefined IdentityUser type that implements the IUser interface and adds a few extra properties such as PasswordHash and Roles. In custom applications, you typically derive your own user type inheriting from IdentityUser. It’s important to notice that getting a new class is not required; you can happily work with native IdentityUser if you find its structure appropriate.
User data storage happens via the UserStore<T> class. The user store class implements the IUserStore interface that summarizes the actions allowed on the user store:
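The interface, as defined in ASP.NET Identity 1.x (reproduced here from memory; verify it against the installed package), reads roughly as follows:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNet.Identity;   // for IUser

// The actions allowed on the user store, repository-style
public interface IUserStore<TUser> : IDisposable where TUser : IUser
{
    Task CreateAsync(TUser user);
    Task UpdateAsync(TUser user);
    Task DeleteAsync(TUser user);
    Task<TUser> FindByIdAsync(string userId);
    Task<TUser> FindByNameAsync(string userName);
}
```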
As you can see, the user store interface looks a lot like a canonical repository interface, much like those you might build around a data access layer. The entire infrastructure is glued together in the account controller class. The skeleton of an ASP.NET MVC account controller class that is fully based on the ASP.NET Identity API is shown in Listing 2.
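A skeleton along the lines of what the Visual Studio 2013 template scaffolds (ApplicationUser and ApplicationDbContext are the template’s default names) might be:

```csharp
using System.Web.Mvc;
using Microsoft.AspNet.Identity;
using Microsoft.AspNet.Identity.EntityFramework;

public class AccountController : Controller
{
    // Default wiring: user store backed by the EF-based db context
    public AccountController()
        : this(new UserManager<ApplicationUser>(
               new UserStore<ApplicationUser>(new ApplicationDbContext())))
    {
    }

    // The authentication identity manager is injected here
    public AccountController(UserManager<ApplicationUser> userManager)
    {
        UserManager = userManager;
    }

    public UserManager<ApplicationUser> UserManager { get; private set; }

    // Login/Register/LogOff actions build on UserManager, e.g.:
    // var user = await UserManager.FindAsync(model.UserName, model.Password);
}
```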
The controller holds a reference to the authentication identity manager. An instance of the authentication identity manager is injected in the controller. The link between the user store and the data store is established in the ApplicationDbContext class. You’ll find this class defined by the ASP.NET MVC 5 wizard if you enable authentication in the Visual Studio 2013 project template.
The base IdentityDbContext class inherits from DbContext and is dependent on Entity Framework. The class refers to an entry in the web.config file, where the actual connection string is read. The use of Entity Framework Code First makes the structure of the database a secondary point. You still need a well-known database structure, but you can have the code to create one based on existing classes instead of manual creation in SQL Server Management Studio. In addition, you can use Entity Framework Code First Migration tools to modify a previously created database as you make changes to the classes behind it.
Currently, ASP.NET Identity covers only the basic features of membership but its evolution is not bound to the core ASP.NET Framework. Features that the official interfaces don’t cover yet (such as enumerating users) must be coded manually, which brings you back to the handcrafted implementation of membership.
#9 Expose HTTP Endpoints
An architecture for Web applications that’s becoming increasingly popular is having a single set of HTTP endpoints—collectively known as Web services—consumed by all possible clients. Especially if you have multiple clients (like mobile applications and various Web frontends) a layer of HTTP endpoints is quite helpful to have. Even if you only have a single client frontend, a layer of HTTP endpoints is helpful as it allows you to have a bunch of Ajax-based functionalities integrated in HTML pages. The question is: How would you define such endpoints?
If you need an API—or even a simple set of HTTP endpoints—exposed out of anything but ASP.NET MVC (such as Web Forms or Windows services) using Web API is a no-brainer. But if all you have is an ASP.NET MVC application, and are tied to IIS anyway, you can simply use a separate ASP.NET MVC controller and make it return JSON.
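As a sketch of that approach (controller name and data are illustrative), a plain MVC controller that only returns JSON might look like this:

```csharp
using System.Web.Mvc;

// A data-only controller: same MVC runtime, no HTML views involved
public class ProductsApiController : Controller
{
    // GET /productsapi/list
    public JsonResult List()
    {
        var products = new[] { "Chai", "Chang", "Aniseed Syrup" };

        // AllowGet is required to serve JSON in response to GET requests
        return Json(products, JsonRequestBehavior.AllowGet);
    }
}
```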
There are many posts out there calling for a logical difference between Web API controllers and ASP.NET MVC controllers. There’s no doubt that a difference exists because overall Web API and ASP.NET MVC have different purposes. Anyway, the difference becomes quite thin and transparent when you consider it from the perspective of an ASP.NET MVC application.
With plain ASP.NET MVC, you can easily build an HTTP façade without learning new things. In ASP.NET MVC, the same controller class can serve JSON data or an HTML view. However, you can easily keep controllers that return HTML separate from controllers that only return data. A common practice consists in having an ApiController class in the project that exposes all endpoints expected to return data. In Web API, you have a system-provided ApiController class at the top of the hierarchy for controllers. From a practical perspective, the difference between ASP.NET MVC controllers and Web API controllers hosted within the same ASP.NET MVC application is nearly non-existent. At the same time, as a developer, it’s essential that you reason about having some HTTP endpoints exposed in some way.
Web API and ASP.NET MVC have different purposes.
#10 Use Display Modes
One of the best-selling points of CSS is that it enables designers and developers to keep presentation and content neatly separated. Once the HTML skeleton is provided, the application of different CSS style sheets can produce even radically different results and views. With CSS, though, you can only hide, resize, and reflow elements. You can’t create new elements, nor can you add any new logic for new use-cases.
In ASP.NET MVC, a display mode is logically the same as a style sheet except that it deals with HTML views instead of CSS styles. A display mode is a query expression that selects a specific view for a given controller action. In much the same way that the Web browser on the client processes CSS media query expressions and applies the appropriate style sheet, a display mode in server-side ASP.NET MVC processes a context condition and selects the appropriate HTML view for a given controller action.
Display modes are extremely useful in any scenario where multiple views for the same action can be selected based on run-time conditions. The most compelling scenario, however, is associated with server-side device detection and view routing. By default, starting with ASP.NET MVC 4, any Razor view can be associated with a mobile-specific view. The default controller action invoker automatically picks up the mobile-specific view if the user agent of the current request is recognized as the user agent of a mobile device. This means that if you have a pair of Razor views such as index.cshtml and index.mobile.cshtml, the latter will be automatically selected and displayed in lieu of the former if the requesting device is a mobile device. This behavior occurs out of the box and leverages display modes. Display modes can be customized to a large extent. Here’s an example:
var tablet = new DefaultDisplayMode("tablet")
{
    ContextCondition = (c => IsTablet(c.Request))
};
var desktop = new DefaultDisplayMode("desktop")
{
    ContextCondition = (c => true)
};

var displayModes = DisplayModeProvider.Instance.Modes;
displayModes.Clear();
displayModes.Add(tablet);
displayModes.Add(desktop);
The preceding code goes in the Application_Start event of global.asax and clears default existing display modes and then adds a couple of user-defined modes. A display mode is associated with a suffix and a context condition. Display modes are evaluated in the order in which they’re added until a match is found. If a match is found—that is, if the context condition holds true—then the suffix is used to complete the name of the view selected. For example, if the user agent identifies a tablet, then index.cshtml becomes index.tablet.cshtml. If no such Razor file exists, the view engine falls back to index.cshtml.
Display modes are an extremely powerful rendering mechanism but all this power fades without a strong mechanism to do good device detection on the server side. ASP.NET lacks such a mechanism. ASP.NET barely contains a method in the folds of the HttpRequest object to detect whether a given device is mobile or not. The method is not known to be reliable enough to work with just any device out there. It lacks the ability to distinguish between smartphones, tablets, Google glasses, smart TVs, and legacy cell phones. Whether it works in your case is up to you.
If you're looking for a genuinely reliable device detection mechanism, I recommend WURFL, which comes through a handy NuGet package. For more information on WURFL, you can check out my article that appeared in the July 2013 issue of CODE Magazine, available at the following URL: .
Summary
ASP.NET MVC 5 is the latest version of Microsoft’s popular flavor of the ASP.NET platform. It doesn’t come with a full bag of new goodies for developers but it remains a powerful platform for Web applications. ASP.NET is continuously catching up with trends and developments in the Web space and writing a successful ASP.NET MVC application is a moving target. This article presented ten common practices to build ASP.NET MVC applications with comfort and ease.
Domain Model
Unless you are creating a plain CRUD application, the model of business data (also referred to as the domain model) should live in a separate assembly and be created and mapped to persistence using any of the approaches supported by Entity Framework, whether Database-first, Model-first, or Code-first.
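As a minimal Code-first sketch of this layout, the domain classes and the DbContext below would sit in their own assembly, referenced by the web project; all class and property names here are hypothetical:

```csharp
using System.Data.Entity;

// Domain model: plain classes, no dependency on the web project.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Persistence mapping via Entity Framework Code-first:
// conventions map Customer to a Customers table.
public class DomainDbContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}
```

The web project then consumes this assembly through its service or repository layer rather than querying the DbContext directly from controllers.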
Bundling, Minification, and Debug Mode
Bundling and minification are not a functional feature; they're a form of optimization. For this reason, it makes little sense to enable them while in debug mode. For common libraries such as jQuery or Bootstrap, a direct reference to the minified version is acceptable. By default, bundling and minification stay disabled until you compile in release mode; the code discussed in this article, therefore, won't have any visible effect until you set debug="false" in the web.config file. If you want to verify bundling and minification while keeping debug mode on until deployment, you can force them by setting the EnableOptimizations property of the BundleTable class to true.
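Concretely, that override is one line in the bundle registration code that runs at startup. The bundle name and script path below are hypothetical:

```csharp
using System.Web.Optimization;

public class BundleConfig
{
    // Called from Application_Start in global.asax.
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new ScriptBundle("~/bundles/jquery")
            .Include("~/Scripts/jquery-{version}.js"));

        // Force bundling and minification even while
        // <compilation debug="true"> is set in web.config.
        BundleTable.EnableOptimizations = true;
    }
}
```

Remember to remove (or conditionally compile) the EnableOptimizations line before relying on debug-mode script references again, since it overrides the web.config setting in both directions.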
Ratchet 2.0
Recently, the project codenamed Ratchet reached version 2.0 and merged and synced up with Bootstrap. Ratchet can be considered the mobile-only version of Bootstrap. It largely follows the same approach and design and offers the same benefits in terms of raising the abstraction level of the HTML being used. The ideal scenario for Ratchet is mobile sites and HTML5 applications, whether compiled to native through Cordova or pinned to the dashboard of mobile operating systems.
Listing 1: A layered controller class
public interface IHomeService
{
    IndexViewModel GetModelForIndex();
}

public class HomeController : Controller
{
    private readonly IHomeService _service;

    public HomeController() : this(new HomeService())
    {
    }

    public HomeController(IHomeService service)
    {
        _service = service;
    }

    public ActionResult Index()
    {
        var model = _service.GetModelForIndex();
        return View(model);
    }
}

public class ViewModelBase
{
    public String Title { get; set; }
}

public class IndexViewModel : ViewModelBase
{
    // More members here
}
Listing 2. Sample account controller class
public class AccountController : Controller
{
    public UserManager<IdentityUser> UserManager { get; set; }

    public AccountController(UserManager<IdentityUser> manager)
    {
        UserManager = manager;
    }

    public AccountController()
        : this(new UserManager<IdentityUser>(
            new UserStore<IdentityUser>(
                new ApplicationDbContext())))
    {
    }

    // Other members here
}
Table 1: Typical project folders.
| Folder name | Intended goal |
| --- | --- |
| App_Data | Contains data used by the application, such as proprietary files (e.g., XML files) or local databases |
| App_Start | Contains initialization code. By default, it simply stores static classes invoked from within global.asax. |
| Controllers | Folder conventionally expected to group all controllers used by the application |
| Models | Folder conventionally expected to contain classes representing the rather nebulous MVC concept of "model" |
| Views | Folder conventionally expected to group all Razor views used by the application. The folder is articulated in subfolders, one per controller |