Thursday 24 January 2019

Design Best Practices for an Authentication System

The IEEE Center for Secure Design (CSD) is part of a cybersecurity initiative launched by the IEEE Computer Society. The Center provides guidance on a variety of cybersecurity-related topics. Here, we focus on best practices for designing an authentication system.

Use an Authentication Mechanism That Can’t Be Bypassed or Tampered with

  • Aaron Bedra, Eligible
  • John Downey, Braintree
  • Matt Konda, Jemurai
To begin with, when creating an authentication system, there are two common designs from which to choose.

Authentication as a Filter

The first school of thought is to push all requests through a centralized login system, only allowing endpoints to respond after the authentication system verifies the session and proxies the request. The filter approach is achieved through standard routing and networking. All requests to the application pass through this point before arriving at their destination, and are gated if the requestor isn’t authenticated. Figure 1 shows the high-level design.
Figure 1. Pushing all requests through a centralized login system to use authentication as a filter. This approach uses standard routing and networking.
In this scenario, all traffic is filtered through an authentication proxy. The proxy inspects the request for relevant information (a valid cookie, an OAuth token, and so on) and verifies it. Upon successful verification, the request is sent to the appropriate service via a routing layer to be completed. If authentication fails, the proxy asks the user to provide valid credentials before continuing. This example assumes that the system is composed of several components. If your system is a single program, where all parts run under the same codebase, you’ll naturally fall into this category. As with any choice, there are benefits and drawbacks to this approach.
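A minimal sketch of this filter approach as Rack middleware in Ruby follows; the WHITELIST constant and the SessionStore verifier are hypothetical stand-ins for real routing rules and session validation:
require 'rack'

# Hypothetical session verifier; a real one would check a session store.
module SessionStore
  def self.valid?(token)
    !token.nil? && !token.empty?
  end
end

class AuthenticationFilter
  WHITELIST = ['/login', '/health'].freeze  # endpoints that bypass authentication

  def initialize(app)
    @app = app
  end

  def call(env)
    request = Rack::Request.new(env)
    return @app.call(env) if WHITELIST.include?(request.path)
    if SessionStore.valid?(request.cookies['session'])
      @app.call(env)  # verified; forward to the routing layer
    else
      [401, { 'Content-Type' => 'text/plain' }, ['Authentication required']]
    end
  end
end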

Pros

  • All traffic is pushed by default to the authentication service. This means that any URLs intended to be accessible without authentication must be specifically identified within a whitelist. This can easily be reviewed for correctness by automated tests.
  • It reduces the burden on additional services. For instance, if a single request ended up producing five internal requests, we might not want five separate authentication events to complete the request. This also reduces the load on the database that contains user credentials, which can create an availability concern if overloaded.

Cons

  • Creating a choke point for authentication means that additional engineering will be required to maintain availability at scale. This introduces operational and architectural complexity and requires additional resources (hardware, caching, and so on) to be properly constructed.
  • It leaves internal systems unauthenticated. There are ways to manage this (such as internal HTTP headers or mutually authenticated protocol exchange). This might not be an issue for some designs, but it must be carefully considered when choosing this option.

Individual Endpoint Authentication

The second pattern (see Figure 2) is to have each endpoint take responsibility for authenticating requests. It might be used in conjunction with other authentication architectures to create internal layers of authenticated requests when additional controls are required for accessing data (such as the detokenization of credit card data; see PCI-DSS 3.1 from the PCI Security Standards Council at www.pcisecuritystandards.org/security_standards).
Figure 2. Individual endpoint authentication, where each endpoint takes responsibility for authenticating requests.
An alternate approach (see Figure 3) uses the same general layout with authentication mechanisms in each service, but makes a service call to an authentication endpoint instead of authenticating inside the service.
Figure 3. An alternate approach to individual endpoint authentication. This approach uses the same general layout with authentication mechanisms in each service, but makes a service call to an authentication endpoint instead of authenticating inside the service.

Pros

  • Trust is defined at every border, creating a system that allows for different authentication scenarios based on data types. This allows for better definition of trust zones when necessary.

Cons

  • If designed incorrectly, this can lead to unnecessarily repeating authentication. In turn, this can create an unnecessary load on critical infrastructure, leading to availability issues.

Additional Design Patterns

The following are some other useful design options to consider.
Exporting resource definitions. A well-designed system should be capable of exporting its view of the world. This means that administrators should, with a simple tool or command, be able to ask for a report of the exposed URL patterns and their corresponding access requirements. For filter-based authentication, this means a list of protected and whitelisted endpoints. This report should be available in a programmatically accessible format (such as XML, JSON, or CSV) to allow for automated testing. This allows, at a minimum, for base system assumptions to be verified on a routine (daily) basis, and also helps seed penetration testing.
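As an illustration, such a report might look like the following JSON (the endpoint patterns are illustrative, borrowed from the policy examples later in this article):
[
  { "pattern": "/app/abc/_acc/cf_comp/usr/viewProfile", "method": "GET", "auth": "required" },
  { "pattern": "/login", "method": "POST", "auth": "whitelisted" }
]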
Explicit authentication bypass (whitelist). The filter architecture will, by default, provide an “always-on” authentication approach. This sets up the system for an explicit whitelist.
We generally prefer this approach because it’s less error-prone. Every endpoint that bypasses authentication will have to be manually enabled and, in most development environments, tracked by version control changelogs and production log books. If the filter approach isn’t taken, endpoints that bypass authentication should be explicit and easily managed. The default should be that authentication is always required.
Use a known standard. Building a solid and secure authentication system isn’t easy. It requires careful thought and effort. The likelihood is high that a home-grown authentication system will be incorrect. In the grand scheme of things, your core business most likely isn’t building a system for authenticating requests. Choose a framework that fits your technology stack and provides as many of the aforementioned recommendations as possible. Some common authentication protocols and standards include the following:
  • OAuth 2.0 (oauth.net)
  • OpenID Connect (openid.net)
  • SAML 2.0
Integrate with a third party (if applicable). If possible, don’t run your own authentication system. There are a number of third-party authentication providers available, but choosing the proper provider(s) for a particular situation should be handled on a case-by-case basis. This option works best in publicly available application environments, and isn’t suitable for every application, especially on-premise ones, due to policy or technical reasons. For such cases, especially in more traditional enterprises, applications can be configured instead to delegate authentication tasks to internally maintained instances of centralized authentication providers.
What to look for: evaluating an authentication framework. It can be challenging to evaluate a new authentication framework. There are lots of things to look for when making your choice. The aforementioned details describe scenarios and consequences of choices made during the authentication system design. This distillation should serve as a checklist for evaluation. In general, not every item must be satisfied for the framework to be considered for use, but relevant risks and tradeoffs should be considered.

Authentication Framework Evaluation Checklist

  • Provides the ability to exchange credentials (username/password, token, and so on) for a valid session.
  • Supports proper session management (www.owasp.org/index.php/Session_Management_Cheat_Sheet).
  • Lets users opt in to two-factor authentication.
  • In a browser-based environment, properly marks the session cookie as HTTPOnly (www.owasp.org/index.php/HttpOnly) and secure (www.owasp.org/index.php/SecureFlag).
  • Provides support for Cross-Site Request Forgery (CSRF; goo.gl/TwcSJX) protection/defenses.
  • Supports token-based authentication mechanisms (such as OAuth).
  • Supports proper password storage (www.owasp.org/index.php/Password_Storage_Cheat_Sheet).
  • Provides integration with third-party authentication providers.
  • Logs all authentication activity (and supports proper audit trails of login/logout, token creation and exchange, revocation, and so on).
  • Has a public record of good security response, disclosure, and fixes.
  • Supports secure account-recovery flows (third-party authentication providers make this easier).
  • Never exposes credentials in plaintext, whether in user interfaces, URLs, storage, logs, or network communications.
  • Enforces use of credentials with sufficient entropy.
  • Protects against online brute-force attacks.
  • Protects against session fixation attacks.

Authorize after You Authenticate

  • Danny Dhillon, EMC
  • Denis Piliphuk, Oracle
When architects start planning an application and its individual components, one of the first things they must decide is where access checks occur and how they’re carried out. There are multiple options for performing authorization checks, and opportunities to get it wrong. However, the main rule that must be universally followed, no matter which model the team chooses to implement, is that all authorization decisions and enforcement should take place at the server side. There’s no such thing as client-side authorization; at best, it can serve as a usability improvement.
Authorization can protect actions such as file system access, network socket operations, and other low-level actions tied to the operating system, language, or framework. Authorization can also protect higher-level actions such as funds transfer, purchase history, and other business logic actions. In this section, we focus solely on authorization concerns for web application users, omitting server-side component and backend authorization concerns.
Access controls can be specified for the entity, or subject, performing an action or actions. Subject-based access controls can limit which actions a subject may execute, and what data the subject may write to or read from those actions. It’s also imperative to always use trustworthy data when making authorization decisions. After all, allowing the request itself to specify the requested privileges, permitted actions, limits, and so on simply defeats the purpose of server-based authorization checks.
Creating an access control policy consisting entirely of coarse-grained URLs isn’t practical for web applications that consist of only a handful of anchor URLs along with dynamically generated pages or endpoints for other content-based resources.
The following example shows an abstraction of a URL-based access control policy. Path elements such as _acc and cf_comp come from the underlying platform, while viewProfile and drawCharts denote functional endpoints exposed by the application itself. Overall, this construct won’t match the application’s resource hierarchy as it’s visible to its administrator.
Here’s a simplified example:
grant principal Joe {/app/abc/_acc/cf_comp/usr/viewProfile, GET}
grant principal Joe {/app/abc/_acc/cf_comp/usr/updateProfile, POST}
grant principal Joe {/app/abc/_view/cf_comp/graphs/drawCharts, GET}
grant principal Admin {/app/abc/_acc/cf_comp/mng/loadAccounts, POST}
Instead of using HTTP-based terms for resources and actions, good authorization policy engines should allow the use of application-specific terminology to express resource hierarchy and actions (again, using an abstract text-based policy representation for this example).
grant principal Joe res=Profile actions={view,  modify}
grant principal Joe res=Chart actions={view}
grant principal Admin res=AccountsMgr actions={create,delete}
Not all system users are born equal, and the level of their authority should depend upon which part of an application they’re currently trying to access. That is, somebody who’s a privileged user in one application (or line of business) doesn’t have to hold similar privileges in other parts of a system (or relevant applications). For instance, consider an HR user with access to the company’s personnel- and performance-management system, consisting of several integrated modules and sharing user accounts. While that HR user holds significant privileges in the application’s personnel-management portion, read access to the system’s performance-management portion might suffice for their work duties. In that case, their trust level (and corresponding privileges) should be determined by whether they’re currently dealing with the application’s personnel- or performance-management part.

Role-Based Access Control

What can be done to allow web applications to differentiate privileges granted to their users? One easy option is to grant user account privileges via statically defined roles, also known as role-based access control (RBAC; see Figure 4). While this approach works for applications with simple access control models, it quickly gets out of hand as the number of roles, tied to various user and group privileges, explodes. An application that needs to make account access decisions based on the user’s office location, role in the company’s hierarchy, relationship to the account, and so on will have an increasingly difficult time capturing all of these nuances with a traditional static RBAC model and, especially, maintaining it over a longer period of time. In large, interconnected systems, it becomes nearly impossible to determine who has access to particular objects or functions, which can result in granting excessive privileges to some users or not revoking privileges in a timely manner when a user’s status changes.
Figure 4. Role-based access control (RBAC). This approach grants user account privileges via statically defined roles. Although it’s easy to implement, it’s best suited to applications with simple access control models.
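A minimal sketch of a static RBAC check in Ruby (the role and permission names are illustrative):
ROLES = { 'joe' => [:user], 'admin' => [:manager] }.freeze
PERMISSIONS = {
  user:    [:view_profile, :update_profile],
  manager: [:view_profile, :load_accounts]
}.freeze

# A user is authorized if any statically assigned role carries the permission.
def authorized?(username, action)
  ROLES.fetch(username, []).any? { |role| PERMISSIONS.fetch(role, []).include?(action) }
end

authorized?('joe', :load_accounts)  # => false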

Pros

  • It’s simple to implement.
  • It’s well-supported by all major web application platforms and containers.
  • The approach is easily understood by developers and users alike.

Cons

  • Developers might be tempted to hardcode roles into application code.
  • As the complexity of access control logic increases, the number of corresponding roles explodes, resulting in a maintenance nightmare and runtime problems.
  • The static role assignments can become stale and must be forcibly refreshed to pick up the latest changes; this can be a highly time-consuming operation on large systems.

Attribute-Based Authorization

Products with highly demanding security models should plan on utilizing dynamic role mapping and authorization based on the user’s profile attributes rather than static security policies—the so-called attribute-based authorization model (see Figure 5). Under this more flexible model, user roles and privileges are dynamically resolved at runtime based on the resource and action combination, and can take into account additional attributes attached to the user’s account.
Figure 5. Attribute-based authorization model. This model offers more flexibility, resolving user roles and privileges dynamically at runtime, based on the resource and action combination.
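A minimal sketch of an attribute-based check in Ruby, assuming the user’s profile attributes are available at decision time (the attribute names and rules are illustrative):
# The decision depends on runtime attributes of the user, resource, and action,
# not on a statically assigned role.
def permit?(user, resource, action)
  case [resource, action]
  when [:account, :transfer]
    user[:department] == 'finance' && user[:office_location] == 'US'
  when [:profile, :view]
    true
  else
    false
  end
end

permit?({ department: 'finance', office_location: 'US' }, :account, :transfer)  # => true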

Pros

  • Roles are resolved dynamically based on the requested resource and action, allowing for significantly greater flexibility of policy design.
  • Fewer policies are necessary, as user-profile attributes are used to make access decisions at runtime.

Cons

  • There’s no built-in support from major web platforms and containers, although this might be available as an add-on option.
  • Implementation is significantly more complicated, beyond the capabilities of regular web application teams, so an external solution is advised.
  • Policy design is less intuitive for development teams.

Centralized Authorization

To ensure consistent authorization enforcement across a large codebase, we recommend that you centralize your authorization logic (see Figure 6).
Figure 6. Centralized authorization. Although initial setup is more complex and expensive, this approach ensures consistent authorization across a large codebase. (JEE = Java Platform, Enterprise Edition; PDP = Policy Decision Points; and PEP = Policy Enforcement Points.)
As an application never exists in isolation, the web application’s team must consider target execution environments. A traditional enterprise application will rely on an array of integrated backend services and applications, which all come with authorization capabilities and requirements. Managing an array of disjoint services could quickly overwhelm an IT department and lead to inconsistent security policies and gaps. To make this process more manageable and consistent, large organizations with complex IT environments often rely on centrally managed security policies, which are then pushed to individual services. RFC 2904 (https://tools.ietf.org/html/rfc2904) uses the term Policy Decision Points (PDP) for the policy management servers. The authorization checks performed at individual services are called Policy Enforcement Points (PEP). It’s the application development team’s responsibility to design their product for such environments and avoid locking in a particular authorization model, which could prove incompatible with models used in target environments.
Consumer-oriented applications, on the other hand, have another set of challenges. While their policy models are typically simpler due to fewer types of objects and classes of principals, scalability of their authorization engines plays a critical role. Such applications (think social media portals or popular gaming sites) will potentially handle millions of users. Responsiveness and resource consumption of their policy engines under peak load can create availability issues.
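A minimal sketch of a PEP delegating a decision to a remote PDP over HTTP (the endpoint URL and response format are hypothetical):
require 'net/http'
require 'json'
require 'uri'

def pdp_permits?(subject, resource, action)
  uri = URI('https://pdp.internal.example/v1/decision')  # hypothetical PDP endpoint
  body = { subject: subject, resource: resource, action: action }.to_json
  response = Net::HTTP.post(uri, body, 'Content-Type' => 'application/json')
  JSON.parse(response.body)['decision'] == 'Permit'  # assumed response format
end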

Pros

  • This simplifies policy management across heterogeneous environments with many components and systems.
  • It ensures consistency of access control rules across all integrated layers.
  • It might help with audit requirements.

Cons

  • The initial setup is significantly more complex and expensive.
  • Dedicated administration is necessary.
  • Some applications could experience a performance impact due to remote calls to the PDP.

Adaptive Authorization

For sensitive operations (such as a funds transfer), it’s important to consider adding another layer of user attestation to build higher confidence in the user’s identity and intentions. This is known as adaptive authorization, and is based on collecting and analyzing additional information about a user’s historical behavior patterns. In case of suspicious behavior, the user might be asked to reconfirm their identity by re-entering their password, or the system might require an additional authentication factor.
For example, consider an employee in a retailer’s finance department who handles payments to its suppliers. This employee has always authorized payment transfer requests to domestic suppliers from their home office location in the continental US during daytime hours, but suddenly issues a nighttime funds transfer to an offshore company from a location in Asia. While most traditional authorization policies will allow this request to proceed (assuming the user doesn’t exceed their transfer limits), the adaptive authorization model will likely notice the odd behavior and act according to the configured policies. It might require the user to provide additional authentication to proceed, or hold the funds and have the transaction reviewed and confirmed by additional authorized users to prevent fraud by the employee or somebody using their stolen credentials.
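A toy sketch of the idea in Ruby: score a request against the user’s historical profile and require step-up authentication above a threshold (the attributes, weights, and threshold are all illustrative):
def risk_score(request, profile)
  score = 0
  score += 2 unless profile[:countries].include?(request[:country])
  score += 1 unless profile[:active_hours].cover?(request[:hour])
  score += 1 if request[:amount] > profile[:typical_max_amount]
  score
end

profile = { countries: ['US'], active_hours: 8..18, typical_max_amount: 10_000 }
request = { country: 'SG', hour: 2, amount: 50_000 }
risk_score(request, profile) >= 3  # => true; require a second factor or manual review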

Pros

  • This helps with catching fraudulent requests not otherwise detectable with traditional access control methods.
  • The system can be trained to learn new access patterns and fringe cases.

Cons

  • This might result in false positives, denying access to legitimate requests.
  • A knowledge base of normal and abnormal access patterns constitutes the most important part of the adaptive system; it must either be supplied with the system or built from scratch during system setup.
  • Initial setup is significantly more complex and expensive.
  • This approach has increased administration overhead.
  • There could be a performance impact due to additional calls and analysis.

Design Notes

All established web platforms—such as Java Platform, Enterprise Edition (JEE) or ASP.NET—provide interception layers to automatically route all incoming requests through their respective authorization frameworks. This type of interception works for coarse URL-based access control checks, but is often insufficient for making business-logic authorization checks. For instance, in the following sample request, we can base authorization policy on the request type (such as GET, POST, PUT, or DELETE), Referer, Content-Type, Content-Length, and other HTTP-specific attributes.
POST /contact/form/message?t=1430597514418 HTTP/1.1
Host: abc.com
Content-Type: application/x-www-form-urlencoded; charset=utf-8
Referer: http://contacts.abc.com/
Content-Length: 58
email=td%2540td.com&name=fre&message=ffewedd
While useful for low-level decision making (for instance, at the Internet-facing front-end HTTP servers), this might be insufficient for some business-level authorization decisions. More complex access control processing might need to take place—for example, in an application or component-specific front gate or a dedicated wrapper, injected at the entry points to business logic services. At that point, the call parameters can be interpreted not simply as generic parameters of HTTP GET or POST methods, but, for instance, as stock symbols, locations, limits, and so on. These business-specific request parameters can then be checked against an authorization policy expressed in business-specific terms.
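A hedged sketch of such a front-gate check in Ruby, where generic POST parameters are first interpreted as business terms before the policy is consulted (the parameter names and the in-memory policy are illustrative):
def authorize_trade(user, params)
  symbol = params['symbol']               # interpreted as a stock symbol
  quantity = Integer(params['quantity'])  # interpreted as a trade quantity
  policy = { 'joe' => { symbols: ['ACME'], max_quantity: 100 } }  # illustrative policy
  rules = policy[user]
  !rules.nil? && rules[:symbols].include?(symbol) && quantity <= rules[:max_quantity]
end

authorize_trade('joe', 'symbol' => 'ACME', 'quantity' => '50')  # => true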

Authorization Framework Evaluation Checklist

  • Supports a provider-based model and lets you configure alternative authorization and role-mapping providers.
  • Supports delegating authorization and role-mapping providers to allow evaluating multiple types of policies in the context of a single request.
  • Enables dynamic role evaluation to reevaluate user roles in the context of a specific action or access to some resource.
  • Includes policy-simulation capabilities to answer the following questions: Can user X access resource Y? Who can access resource Y?
  • Allows policy modeling in native application terminology, as opposed to generic HTTP terms.
  • Provides PEP for all major components of the application under consideration.
  • Meets your scalability and latency requirements.

Strictly Separate Data and Control Instructions, and Never Process Control Instructions Received from Untrusted Sources

  • Danny Dhillon, EMC
To maintain control of the actual instructions running within an application, we must strictly separate data from control instructions and ensure that untrusted data are never treated as application instructions. There are a variety of ways that this breaks down in real systems:
  • cross-site scripting (XSS),
  • SQL injection,
  • command injection,
  • unsafe serialization and deserialization, and
  • unsafe reflection.
At a conceptual level, each  of these potential security issues stems from the same root cause: untrusted data being incorporated into an application and then  executed or interpreted in an unplanned way.

Approaches

To comprehensively prevent  these types  of vulnerabilities, we recommend the use  of application- and framework-level approaches that  reliably inhibit introducing such  bugs during application development.

Cross-Site Scripting

Approach: Use HTML markup/templating systems that only produce encoded output (goo.gl/9ZDStx). XSS vulnerabilities can be avoided by adopting the convention that all HTML markup must be produced by APIs and libraries that guarantee correct, context-specific encoding and validation of data interpolated into HTML markup. In many cases, application developers use HTML templating systems to implement the generation of HTML markup.
These templating systems are easy to use and default to encoded output. Often, though, it’s difficult to apply a new templating system across a large application surface.
Also, it’s common for exceptions to arise where rich content is intended. With these frameworks, in some cases, applications are still exposed to certain types of XSS; see the Open Web Application Security Project’s (OWASP’s) cheat sheet on preventing XSS for more information (goo.gl/3ImU1k). All languages and platforms have support for manual output encoding. Any time that frameworks can’t be used, output encoding should be used.

Additional Considerations

Input validation isn’t a recommended approach for preventing XSS. It’s often possible to bypass input validation because validation is written with brittle regular expressions that don’t account for encoding. Furthermore, data are frequently shared between systems. For more information, we recommend reading Christopher Kern’s “Securing the Tangled Web” (http://research.google.com/pubs/pub42934.html).
Context-aware output encoding is a natural evolution of the standard output encoding mentioned thus far. It means that content is being written with the understanding of where in a rich HTML document it’s going to be used. There are different rules for what’s acceptable within the body, tag attributes, URLs, scripts, and so on.
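Where a templating framework can’t be used, manual output encoding looks roughly like the following Ruby sketch; note that this encodes for the HTML body context only, and that attribute, URL, and script contexts each need their own rules:
require 'cgi'

untrusted_name = '<script>alert(1)</script>'
# Encoding for the HTML body context turns markup into inert text.
safe_html = "<p>Hello, #{CGI.escapeHTML(untrusted_name)}</p>"
# => "<p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>"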

Examples

  • Closure Templates (Java/JavaScript)
  • html/template (Go)
  • ERB with Rails’ default output escaping (Ruby)

Framework Evaluation

In evaluating frameworks, it’s recommended that developers check the following:
  • Does the framework perform output encoding by default?
  • Is the documentation clear that overriding the output encoding  could allow for a vulnerability?
  • Do static analysis tools identify when the default behavior has been overridden?
  • Does the framework support contextual encoding?
  • Has the framework been reviewed for security?
  • Does the framework have a track record of responding to security issues?

SQL Injection

Approach: Use an object-relational mapping (ORM) that offers a rich API and parameterizes queries by default (www.owasp.org/index.php/SQL_Injection). SQL injection vulnerabilities can be avoided by using frameworks that perform parameterized queries by default. A parameterized query protects the database engine from running untrusted input as part of the query structure.
It’s important that the data access framework supports a rich API to aid developers in building complex queries through the API. This serves to discourage arbitrarily complex but error-prone string concatenation to build queries.
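Beneath any ORM, the underlying mechanism is a bound parameter. Here’s a minimal Ruby sketch using the sqlite3 gem (an assumption; any driver with bind parameters behaves the same way):
require 'sqlite3'

db = SQLite3::Database.new(':memory:')
db.execute('CREATE TABLE users (id INTEGER, email TEXT)')
db.execute('INSERT INTO users VALUES (1, ?)', ['alice@example.com'])

untrusted = "alice@example.com' OR '1'='1"
# The ? placeholder binds untrusted input as data, never as query structure.
db.execute('SELECT id FROM users WHERE email = ?', [untrusted])  # => [] (no match)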

Additional Considerations

It’s possible to augment API functions with helpers that perform additional checking. It’s usually possible to identify anti-patterns when this approach is used, because string concatenation functions represent deviations from the desired pattern.

Examples

  • Hibernate (Java),
  • NHibernate (.NET),
  • ActiveRecord (Ruby),
  • SQLAlchemy (Python), and
  • Entity Framework (.NET).

Framework Evaluation

  • Ensure that the persistence mechanism builds dynamic parameterized queries.
  • Ensure that the API is flexible enough to accommodate complex queries that will be required (so that developers can realistically use the API).
  • Has the framework been reviewed for security?
  • Does the framework have a track record of responding to security issues?

Command Injection

Approach: Avoid system commands or use a library to escape the input (www.owasp.org/index.php/Command_Injection). Command injection vulnerabilities often depend on altering a system command through meaningful characters such as a semicolon. As such, command injection vulnerabilities can be avoided by using frameworks that perform user data escapes before issuing the command.

Additional Considerations

As an example, in Ruby, there’s a library called Shellwords (http://ruby-doc.org/stdlib-2.0.0/libdoc/shellwords/rdoc/Shellwords.html) that can translate a potentially malicious string input into an innocuous string.
require 'shellwords'
puts Shellwords.escape("abc-';def")
abc-\'\;def
Another consideration is to use  popen, which gives programmers explicit control over all aspects of the process launch.
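A brief Ruby sketch of both options follows; in the array form, arguments go directly to the new process, so the shell never interprets metacharacters (the ls command and file name are illustrative):
require 'shellwords'

untrusted = "abc-';def"
# Option 1: escape before interpolating into a shell command string.
system("ls #{Shellwords.escape(untrusted)}")
# Option 2: array form of popen; no shell is involved at all.
IO.popen(['ls', '-l', untrusted]) { |io| puts io.read }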

Examples

  • Shellwords (Ruby) and shlex (Python)
  • String::ShellQuote (Perl)

Framework Evaluation

  • Opt for escaping libraries that are available within the language or a core framework.
  • Ensure that the escaping library handles common cases of operating system special characters.
  • Has the framework been reviewed for security?
  • Does the framework have a track record of responding to security issues?

Serialization and Deserialization

Approach: Avoid writing your own serialization libraries, and know which are intended to be able to handle malicious input. It’s quite common for applications to parse serialized data that have been received from an untrusted source. Parsing code that’s implemented in a non-memory-safe language, especially if the format is a binary one, can be prone to memory-corruption bugs.
Deserializers that transform a serialized representation (in XML or JSON, for example) into corresponding data objects are often implemented using reflection. Mistakes in the design of reflection-based deserializers can result in vulnerabilities where the deserialization of untrusted input might cause unintended code to execute (for example, during object construction, or via access to nontrivial setter methods).
Some serialized representations have complex features that can result in security issues if supported or enabled in the parser. For example, XML supports so-called external entities, which refer to an external resource identified by a URL included in the input XML. If external entity resolution is enabled in the XML parser, a maliciously crafted XML document might instruct the XML processor to source and include any resource identified by a URI.
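As a hedged sketch using Ruby’s nokogiri gem (an assumption about the stack): entity substitution stays off unless explicitly enabled, and the parse options below additionally reject malformed input and forbid network access:
require 'nokogiri'

untrusted_xml = '<?xml version="1.0"?><doc>hello</doc>'
doc = Nokogiri::XML(untrusted_xml) do |config|
  config.strict.nonet  # reject malformed XML; never fetch external resources
  # Do NOT enable config.noent, which would substitute (external) entities.
end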

Additional Considerations

The following are recommendations around serialization and deserialization.
  • Avoid writing ad hoc implementations of parsers, especially in non-memory-safe languages. Instead, use a well-vetted library or parser generator.
  • When using third-party libraries, carefully consider whether they’re suitable for processing untrusted inputs, and review their security record.
  • Don’t use Python’s pickle module (https://docs.python.org/2/library/pickle.html) to process (that is, “unpickle”) untrustworthy inputs.
  • Don’t use Ruby’s YAML or Marshal to process untrustworthy inputs.
  • The use of eval should be strictly avoided. Some JavaScript libraries use eval to parse JSON, because JSON is a subset of JavaScript. This can result in malicious JavaScript embedded in JSON being executed (see the sketch after this list).
  • When choosing a library that unmarshals serialized forms into objects, consider approaches that don’t rely on runtime reflection, and instead rely on compile-time code generation (such as Protocol Buffers or Thrift). This completely avoids risks related to the use of reflection.
  • Avoid the use of ad hoc string concatenation to produce serialized forms, relying instead on a well-vetted library to do so. When choosing a library, consider its security record, and whether it comprehensively addresses injection issues through appropriate validation and escaping. Use the framework provided for URL processing.
  • Avoid “stringly typed” data: don’t introduce application domain-specific string representations for structured data (such as colon-separated string representations of tuples). Instead, use structured data types. When building cross-platform applications, consider a standard interchange format such as Thrift.
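To illustrate the eval point in Ruby terms: a dedicated parser only builds data structures, while eval would execute whatever the input contains (the payload below is illustrative):
require 'json'

untrusted = '{"name": "alice"}'
data = JSON.parse(untrusted)  # a real parser: structure only, no code execution
data['name']                  # => "alice"
# Never: eval(untrusted) -- a crafted payload would run as code.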

Framework Evaluation

  • Opt for serialization libraries that are available within the language or a core framework.
  • Prefer formats that can be suitably configured to parse entirely untrustworthy serialized forms.
  • Ensure that XML parsers are configured to not resolve external entities.
  • Has the framework been reviewed for security?
  • Does the framework have a track record of responding to security issues?

Unsafe Reflection

Approach: Favor frameworks that support explicit wiring as opposed to reflection (www.owasp.org/index.php/Unsafe_Reflection). Some web frameworks support a convention-over-configuration paradigm, where (for instance) specific request handlers are automatically wired up with request URL paths through naming conventions related to the names of handler classes and methods. Implementations of such frameworks typically achieve this through the use of reflection or reflection-like mechanisms in the underlying language.
Although this is desirable and convenient from a developer’s perspective, this approach to framework design can result in considerable security risks. By design, it exposes control over code execution (such as control over the reflective invocation of particular methods) to external attackers, for example via components of an HTTP request or path that directly designate a method to be executed. This can result in security problems at two levels: First, there might be bugs in the framework itself that permit an attacker to cause execution of code that isn’t meant to be directly invoked by an external entity, and whose execution has security consequences. Second, it might result in the inadvertent external exposure of application-level functions whose direct invocation has security consequences.
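A minimal sketch of explicit wiring using Sinatra (an assumption about the stack): only routes that are declared by hand exist, so no handler can be reached through naming conventions alone:
require 'sinatra/base'

class App < Sinatra::Base
  # Each externally reachable endpoint is wired explicitly.
  get '/profile' do
    'profile page'
  end
  # Internal helpers remain plain methods and are never auto-exposed as routes.
end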

Additional Considerations

Expression languages (EL) can pose a significant risk. If an attacker can cause evaluation of attacker-controlled expression strings, this can result in the attacker’s ability to execute arbitrary code on the server.

Conclusion

Adopting frameworks that enforce clear separation of the data and control structures is a general way to address a number of classes of common software security vulnerabilities. Whenever possible, lean toward adopting frameworks that provide these controls.

Framework Evaluation

  • Opt for frameworks that don’t by default expose controller endpoints or routes.
  • Lean toward frameworks that allow explicit wiring.
  • Prefer frameworks whose implementations have been security reviewed.
  • Review the framework’s vulnerability history for issues in this area.

Understand How Integrating External Components Changes Your Attack Surface

  • Edward Bonver, Symantec
  • Danny Dhillon, EMC
Applications often incorporate large amounts of third-party code in the form of libraries. As an example, a simple Spring template application generated from the Spring Initializr includes 57 dependencies. A similar Rails application template generated with Rails Composer includes 96 dependencies.
Some industry experts estimate that more than 80 percent of the code included in an average project actually comes from third-party libraries. Given that any code can have vulnerabilities, it’s important to understand that vulnerabilities can be introduced to an application through these third-party libraries, and a significant portion of the risk involved in building an application can come from these dependencies.

Approaches

We recommend the following approaches to prevent  such  vulnerabilities.

Automate Scanning for Known Vulnerable Components

Manually reviewing a code base for vulnerable dependencies is a slow and error-prone task. It’s a great candidate for automation. Automated dependency-checking tools scan application dependencies against a database of existing vulnerabilities. These automated tools can often be added to build automation and continuous integration systems to provide feedback early and often.
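As one hedged example, Ruby projects can run bundler-audit in a CI step; it checks Gemfile.lock against a local copy of a vulnerability advisory database and fails the build when a known-vulnerable gem is found:
gem install bundler-audit
bundle-audit update   # refresh the local advisory database
bundle-audit check    # nonzero exit if any pinned gem has a known advisory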

Examples

  • OWASP Dependency-Check (Java and other languages)
  • bundler-audit (Ruby)
  • Retire.js (JavaScript)

Additional Considerations

  • To get the most out of automated scanning, it’s useful to set it up as part of a continuous integration system.
  • There should be a mechanism to update the database.
  • Dependency versions should be scoped (pinned), so that scans map to the exact versions in use.

Build a Vulnerability Triage and Response Plan

It’s useful to have a defined process for handling vulnerabilities in library dependencies. Often it makes sense to define tiers of criticality with different response-time windows, such as the following:
  • Critical—fix as soon as possible while managing availability risk.
  • Important—fix within the next sprint or release window.
  • Background—fix within a quarter with other application updates.
The response plan should be agreed upon by stakeholders up front, so that it can be followed when the time comes.

Subscribe to Mailing Lists for Security Announcements

Even with automation, manually reviewing dependency vulnerability is still necessary. To help you keep up to date, you can subscribe to specific mailing lists for your dependencies, or use catch-all lists (such as the following examples).

Examples

  • oss-security (www.openwall.com/lists/oss-security/)
  • US-CERT announcements (www.us-cert.gov)
  • Framework-specific lists, such as the rubyonrails-security group

Because the approaches mentioned aren’t frameworks, no evaluation checklist accompanies them.

Conclusion

The attack surface of an application includes substantial code from third-party frameworks. It’s critical to identify and address vulnerabilities in these dependencies. Automated tools can help to identify these issues early in development and make updates easier. A triage process can help to keep fixes prioritized across stakeholders.

Acknowledgments

This document, along with others, came to fruition through the collaborative efforts of many participants at the CSD’s 2015 workshops. We thank everyone for their contributions, especially John Downey and Matt Konda.

Contact Us

If you’re interested in keeping up with the IEEE Center for Secure Design’s activities, follow us on Twitter @ieeecsd or via our website (http://ieeecybersec.wpengine.com/). If you would like to help with CSD activities, contact us at ieee-csd@ieee.org.


Public Access Encouraged

Because the authors, contributors, and publisher are eager to engage the broader community in open discussion, analysis, and debate regarding a vital issue of common interest, this document is distributed under a Creative Commons BY-SA license. The full legal language of the BY-SA license is available here: http://creativecommons.org/licenses/by-sa/3.0/legalcode.
Under this license, you are free to both share (copy and redistribute the material in any medium or format) and adapt (remix, transform, and build upon the material for any purpose) the content of this document, as long as you comply with the following terms:
Attribution–You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may use any reasonable citation format, but the attribution may not suggest that the authors or publisher has a relationship with you or endorses you or your use.
“ShareAlike”–If you remix, transform, or build upon the material, you must distribute your contributions under the same BY-SA license as the original. That means you may not add any restrictions beyond those stated in the license, or apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Please note that no warranties are given regarding the content of this document. Derogatory use of the content of this license to portray the authors, contributors, or publisher in a negative light may cancel the license under Section 4(a). This license may not give you all of the permissions necessary for a specific intended use.
