Don’t Disrespect the Web.Config – Encryption

As stated in an earlier post…

There are three things every public website should be doing with its web.config:

  1. Use web.config transforms to keep track of development versus production settings
  2. Encrypt important configuration sections for security
  3. ELMAH – Error Logging Modules and Handlers

In this post, we’ll discuss how to encrypt sensitive sections of the web.config so passwords and other information cannot be easily read by those who gain access to the file.

Why is it important?

Encrypting sensitive sections of the web.config is important because they are just that: sensitive. Think about your production web.config file. It may contain all sorts of data that you would not want to be accessible: passwords for SQL database connections, passwords to an SMTP server, API keys, or critical information for whatever system is being automated. In addition, web.config files are usually treated as just another source code file, so there are probably many versions in your source control system right now. That means any developer on the team (or, more accurately, anyone with access to the source code) can see what information is in the web.config file.

In many cases, storing passwords in a web.config is itself unnecessary and should be avoided. However, we know it is all too easy to fall into the trap of placing them in this flexible, convenient file. Therefore, at the very least, certain sections should be encrypted so they cannot be easily read or used for evil.

Encrypting ConnectionStrings

In our example, we will encrypt two typical configuration sections, connectionStrings and appSettings, on a Windows 7 development machine.

Follow these steps:

1. Open a command prompt with elevated (Administrator) privileges:

2. At the command prompt, enter:

cd "C:\Windows\Microsoft.NET\Framework\v4.0.30319"

(On a 64-bit machine, the tool also exists under C:\Windows\Microsoft.NET\Framework64\v4.0.30319.)

3. Now enter the following to encrypt the ConnectionStrings section:

aspnet_regiis.exe -pef "connectionStrings" "C:\WebApplication1\WebApplication1"

In this case, C:\WebApplication1\WebApplication1 is the directory where our web.config is located.

4. Enter the following to encrypt the AppSettings section:

aspnet_regiis.exe -pef "appSettings" "C:\WebApplication1\WebApplication1"

For reference on all the command-line options of aspnet_regiis.exe, refer to this MSDN page.

Decrypting ConnectionStrings

Of course, you may need to read the original, unencrypted data at a later time. That is easy: perform the same steps, but use the "-pdf" option to decrypt the sections. For example:

aspnet_regiis.exe -pdf "connectionStrings" "C:\WebApplication1\WebApplication1"

Note that you never need to decrypt the file for your application's sake; ASP.NET decrypts protected sections transparently when the configuration is read at runtime.

Deployment

Deploying your web application with encrypted web.config sections is simple, but it may not be obvious. This StackOverflow answer explains the steps best. Generally, any server or development machine that uses the same encrypted web.config data must use the same RSA key pair, which can be exported and imported with the aspnet_regiis tool. For example, to export the default key container (including its private key) to a file:

aspnet_regiis.exe -px "NetFrameworkConfigurationKey" "C:\keys.xml" -pri

Then, on the target server, import the container and grant the application pool identity access to it:

aspnet_regiis.exe -pi "NetFrameworkConfigurationKey" "C:\keys.xml"

aspnet_regiis.exe -pa "NetFrameworkConfigurationKey" "IIS AppPool\DefaultAppPool"

(The key container and account names above are illustrative; use whichever container your provider is configured with.)

Before and After

There you have it. You have successfully encrypted two sections of your web.config file. Compare the before and after results below:

Before

<connectionStrings>
  <add name="ApplicationServices"
       connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnetdb.mdf;User Instance=true"
       providerName="System.Data.SqlClient" />
</connectionStrings>

After

<connectionStrings configProtectionProvider="RsaProtectedConfigurationProvider">
  <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
      xmlns="http://www.w3.org/2001/04/xmlenc#">
    <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#tripledes-cbc" />
    <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
      <EncryptedKey xmlns="http://www.w3.org/2001/04/xmlenc#">
        <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5" />
        <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
          <KeyName>Rsa Key</KeyName>
        </KeyInfo>
        <CipherData>
          <CipherValue>fK275KFHx9RKip16DTpwxLi4AHpvCpat4S3edgsDwco9PgudsMKc1qAyh9qNt2y+90qV4QIzyZXm8j27UV5J+R5rNruMUOROLWzVt8qkRYRM3ADoiCi5BJh2SsjE0guGXFbufZDgRpPFV5bstgZSBPYNiYXQF/aOLyQjPCE8VDo=</CipherValue>
        </CipherData>
      </EncryptedKey>
    </KeyInfo>
    <CipherData>
      <CipherValue>CSdausUH7yWcY8t1sPUqiCooYreEauzi4t33gVJuWYcfhspsguTchJjwthUTMLqnulYRmCu8ZnhrVBepQo7PHO/4k5mwo3s46TsgFddvvUlyY/EDQf047LG0pocBDxL3MgIGf3b+atoG29Jg0Wnhj+M6urYG55Ko4nGp36JILQptlEn+sqCl2sQ99izykXtRWP7kC4tldO+YvBuZ7x8fyGoANwSKQFo7cH+dbydvCkRvaFQsRATdsQKGmSrXwIlkoNvxFb1CBPx0qDenyCs+vO4QyF2CZ8QB+UIJzA8EL7W/FovH5zDczjXQWTsFSmsI+vSojl9G9jSVLJFbwOpQBLIKxfximl5r</CipherValue>
    </CipherData>
  </EncryptedData>
</connectionStrings>

In a future blog post, we will discuss the third-party component ELMAH, which is vital for being notified when your users encounter exceptions in your web application.

 

Update 07/24/2012
It is possible to combine encryption with web.config transforms; I know this works because I have done it before.

First, I had to add a configProtectedData section at the top of my web.config. In my case, this went into web.config.release, as I did not encrypt my default development web.config:

<configProtectedData defaultProvider="MyRsaProtectedConfigurationProvider" xdt:Transform="Insert">
  <providers>
    <add name="MyRsaProtectedConfigurationProvider"
         type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL"
         keyContainerName="NetFrameworkConfigurationKey_viternuscom"
         useMachineContainer="true" />
  </providers>
</configProtectedData>

Also, the web.config sections need to be replaced at the section level; you cannot transform the individual name or connectionString attributes. So, you can use something like the configuration below to replace the whole connectionStrings section in each environment:

<connectionStrings configProtectionProvider="MyRsaProtectedConfigurationProvider" xdt:Transform="Replace">
  <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
      xmlns="http://www.w3.org/2001/04/xmlenc#">
    <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#tripledes-cbc" />
    <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
      <EncryptedKey xmlns="http://www.w3.org/2001/04/xmlenc#">
        <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5" />
        <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
          <KeyName>Rsa Key</KeyName>
        </KeyInfo>
        <CipherData>
          <CipherValue>fV3NsFhZR/l0/5nvioFfjwjhhauNUTR96fQOK3QeRTW05ERDAQrFGj9MBt5Jh7Ca4rIS2JZfOfNTjTxWiEp/tjk+9LXVyPKrJYMiNlYiUmZGfV/amPsLPmRm2pOEyKwJhJLN6NyZdht/xGrf1ClDKO6CG1ViA5pK5R8Db8X9ul4=</CipherValue>
        </CipherData>
      </EncryptedKey>
    </KeyInfo>
    <CipherData>
      <CipherValue>eFvSbAzbVUzwa9Sl8V6t43kuwAcvmaPUjboSJ/oi+MMJXyqtqXS8dKSuxBy+E0rC8tUxxIfJppZNm+CCoKf9Rm39vW2flpgcsvm8ZNMekSf4r2GWYAvLw3vYvMBcbnFRqktlaM7cXia38+3KGN8skHzxioqrBgy2QQqqPWIPrmrCS440BRlXEck6XwAO9rZOERgM6+OtlRan4EuGoB0O4acJWbp51zWxkfzqxMb600BHkYzeIYkHH8GNvWo+LSQt6o+NYW+Q7sm/lLFY5hPp3pGTOygXPehT1b/3BWZM+1dJ5sh8sBXO+t5m7/Dzqt4nvMqArmdEUvQdhYAPauC3Uj9HjDFpHkbOjVEzohIvB0kJ1Wc3uP4VvE6CRMbAsrRiSNLDlT6OpXYVrArLk9c1bBA56nFXPMxLEpN1umRcCfaQY0qxKrZi/yJ8dKD/C/5Vo7o50f10jM9eUrt3/uS71bNJk5U9N7kO42tZZGXZMui51o6MWcYxSC7VQ3KdCpy6UacBnD8MYr7EHeZ591ATQds8dzcsXY7w6Lsg1pXLK74HqXMW/xDeLtBoWJxat9y+</CipherValue>
    </CipherData>
  </EncryptedData>
</connectionStrings>

I hope this helps.


Implications of Windows Azure Container Types

In 7 Reasons I Used Windows Azure for Media Storage, I described the download process involved in streaming a large video through a Silverlight applet using the Microsoft cloud offering. My scenario involved the use of public containers to store large files (blobs) for a web application. Public containers are convenient because they can be accessed via a simple GET request. Unfortunately, that simplicity invites abuse: because a blob is accessible via a plain URL, any user on the web can link to the file or download a copy for personal use.

If you are already using public containers, do not be alarmed: your storage is not entirely exposed. I tested my site by requesting a URL with the file name removed, and the result indicated the URL could not be found. I immediately breathed a sigh of relief. In other words, even public containers do not behave the way IIS does when the Directory Browsing setting is enabled.

Example URL: http://{ApplicationName}.blob.core.windows.net/{Container}/

Still, for cases in which public containers are not satisfactory due to their openness, the alternative is to use private containers.

Private containers are similar to public containers and remain fairly simple to use. They require the inclusion of a unique key during the GET request for stored files. This is extremely easy using the Azure SDK sample, which abstracts away the details of what must be included in the request.

Effectively, the container type determines where the request for Azure blobs comes from. For public containers, the request comes from the client, because a simple URL fetches the file. In contrast, requests for private containers must come from the server. The server-side code embeds the key in the GET request, receives the blob, and processes it or delivers it to the client accordingly.
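As a sketch of what embedding the key in a request involves: Azure Storage authenticates private requests with a SharedKey Authorization header, an HMAC-SHA256 signature over a canonicalized description of the request, keyed with the account's secret storage key. The Python fragment below is a simplified illustration; the account name, key, and string-to-sign are made up, and the real scheme canonicalizes the HTTP verb, headers, and resource path:

```python
import base64
import hashlib
import hmac

def shared_key_header(account: str, key_b64: str, string_to_sign: str) -> str:
    """Build a SharedKey-style Authorization header value (simplified)."""
    key = base64.b64decode(key_b64)  # the storage key is distributed base64-encoded
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("ascii")
    return f"SharedKey {account}:{signature}"

# Hypothetical account and (base64-encoded) storage key
header = shared_key_header(
    "myapp",
    base64.b64encode(b"not-a-real-storage-key").decode("ascii"),
    "GET\n/myapp/videos/large-video.wmv",
)
print(header)
```

In practice the Azure SDK builds this header for you; the point is that only code holding the storage key, i.e. the server, can produce a valid signature.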

The obvious benefit of private containers being accessible only to server-side code is that security logic can run there, restricting blob access to specific users based on rules. It also makes it much more difficult (though still possible) to download files stored in private containers for personal use. The drawback is that streaming now passes through the server, greatly increasing the bandwidth consumed by the application.

As described above, there are cases to be made for the use of both public and private containers. The best solution comes from weighing security requirements against bandwidth and development costs. Of course, there are ways to reap the benefits of both paradigms, but the above restrictions cover the “out of the box” web application scenario.

 

Let’s Raise the Standard of Security Knowledge

What is the best way to raise the standard of developer knowledge in the area of security best practices?


I ask because this is a particular pain point of mine. Personally, I must admit I am not where I should be with programming securely. However, I am definitely experienced enough to be able to spot obvious security issues in a software application. Not a month goes by, not a month, in which I do not stumble upon some basic security vulnerability in code I am maintaining or have to instruct a colleague why a particular implementation could be catastrophic. Do others feel this way about code I have produced? I hope not.

I practice some of the basics:

  • No SQL Injection Vulnerabilities
  • No Cross-Site Scripting Vulnerabilities
  • No storage of passwords in configuration files
  • No delivery of sensitive information in plain text
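To make the first bullet concrete, here is a minimal sketch of the difference between concatenated and parameterized SQL, written in Python with the standard library's sqlite3 module (the table and data are made up; the same idea applies to ADO.NET with SqlParameter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# UNSAFE: string concatenation lets the input rewrite the query itself:
#   "SELECT role FROM users WHERE name = 'alice' OR '1'='1'"
# which would return every row in the table.

# SAFE: a parameterized query treats the input as a literal value.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # prints [] -- the injection attempt matches no real name
```

A legitimate lookup through the same parameterized query (passing "alice") returns [('admin',)] as expected; only the malicious input is neutralized.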

How can we make sure that any developer who puts new code into production knows these standards at a minimum?

I don’t want to have to teach someone again that in-line SQL is bad or that user input can’t be trusted. I don’t want to be able to look into a database and see actual user passwords strewn about. It’s not that I don’t enjoy teaching others about these things; I do very much enjoy teaching. It’s that I shouldn’t have to. There should be a minimum security skill set that any developer should have before getting paid to program.
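On the stored-passwords point specifically, the minimum bar is a salted, deliberately slow hash, never plain text in a table or a config file. Here is a minimal Python sketch using the standard library's PBKDF2 (the iteration count is illustrative; .NET offers the equivalent in Rfc2898DeriveBytes):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune so one hash takes tens of milliseconds

def hash_password(password, salt=None):
    # Store (salt, digest) per user; never store the password itself.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
```

Because each user gets a random salt, identical passwords produce different digests, so a stolen table cannot be attacked with a single precomputed dictionary.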

My frustrations with this problem have been present for years, yet they have not led me to any solutions. How do we teach young developers about security? Even if every company hiring entry-level developers held an orientation at which best practices were taught, it would not be long before the next generation of hacks evolved and new security knowledge became necessary. That raises the next question: how do we all stay abreast of the most relevant security best practices?

As noted, I am not a security expert. However, I think I am often able to think about how someone could manipulate a system as I am writing code for it. Unfortunately, I tend to only notice these vulnerabilities because I am intimate with the code. My philosophy is always that if there is a vulnerability, even one that can only be known by fully understanding the code, it is just a matter of time before a motivated hacker would be able to find the exploit.

I know that I need to improve my skills. I need to be able to design software solutions to defend against security vulnerabilities. I need to innately understand secure coding tactics. I strive to be a competent developer in these areas. Where do I go to learn best practices without devoting my entire career to this expertise?

My preference would be to get regular (annual or semi-annual) training on the topics I need to improve or that most concern my industry. It would be great to be sent by my company for an uninterrupted session with security experts. Perhaps even better would be if I was able to work closely with a senior developer who was deeply experienced with security considerations. As I have said before, it is important to work in a job at which there are more experienced colleagues to learn from.

In my past experience, it seems that companies do not prioritize security enough. Sure, the boss may say that any new applications or modules must be “secure.”

The real problem, though, is that a lot of this was beyond developers’ abilities. Any reasonably sized company is going to have many developers who are good enough at writing code, but just do not have the security mindset.

From user “Dan Ellis” on StackOverflow.com

As developers, we must be pragmatic, finding the perfect balance between practicality and principles. In other words, if the boss says that an application must be secure, he or she is inherently making a tradeoff. The developer, with security as a requirement, must spend time researching what makes an application secure, how to make it secure, and then implementing the security. All this for features which are not obvious in the final application. Security features in a product usually go unnoticed (if done right) and tend to instead get deprioritized due to the pressures in the corporate world to write software on time and on budget. Additionally, developers are more likely to focus on things that they already know. Don’t you think the typical developer would be more likely to write “working software” on time with the thought that security could be added in later?

Of course this is a misguided approach, but who is going to be the catalyst for change? In my opinion, it is the responsibility of everyone involved in writing software to make sure it is secure. It is the responsibility of the company to ensure that secure practices are a part of the culture, that developers know security is a priority, and that developers are educated about best practices. It is the responsibility of the developer to ask appropriate questions about security and to raise concerns. The developer should also spend personal time learning about security vulnerabilities and how to defend against them.

I would have thought all the horror stories (e.g. here, here, or here) about software applications being hacked and security vulnerabilities causing chaos would be enough for companies to place a higher priority on security. It hasn’t worked, so I need help. What are the points of discussion to convince software development managers that this is a higher concern? Should I just tell them, “Hey, we need to pay attention to this if we don’t want to get sued?!?!”

Links:

Food for Thought:
One thing pointed out to me in the DiscountASP.Net Knowledge Base is that oftentimes it is not a security bug in the website itself; instead, a developer's machine is compromised and sites/names/passwords are scavenged, allowing a hacker access to the hosted web application.

Herding Code Podcast #75: Barry Dorrans on Developer Security

The HaaHa Show: Microsoft ASP.NET MVC Security with Haack and Hanselman

Web Security Horror Stories (slideshow)