
Lessons Learned Building Enterprise Software
October 13, 2013
Too much of the literature about the Entity Framework contains directions for mapping directly to Stored Procedures. The way it is presented, it seems easy enough. However, attempting to implement this strategy beyond a simple example quickly becomes unnecessarily difficult.
Let’s say you would like to implement a Model First pattern whereby all the entities’ create, update, and delete (CUD) methods were mapped to Stored Procedures. For cases where the database already exists or where using Stored Procedures is a predetermined constraint of the project, this pattern enables your .NET code to mimic typical LINQ to Entities code. Ideally, the mappings would be mostly abstracted away into the Entity Framework definition file. Once mapped, developers would not have to pay much attention to this implementation detail.
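To make the goal concrete, here is a minimal sketch of the kind of calling code this pattern allows; the context, entity, and property names are hypothetical, and with the mapping in place the SaveChanges call would execute the mapped Insert Stored Procedure behind the scenes.

// Hypothetical EF4 ObjectContext and entity; with CUD mapping in place,
// SaveChanges calls the Insert Stored Procedure instead of generating SQL.
using (var context = new OrdersEntities())
{
    var order = new Order { CustomerId = 42, Total = 99.95m };
    context.Orders.AddObject(order);
    context.SaveChanges();
}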
Unfortunately, there’s a vast difference between what you can do with the Entity Framework in a simplified demonstration and the restrictions you will run into in practice. Below is a guide that covers the pros, cons, and restrictions of the “EF-to-STP” approach.
There are some advantages to using Stored Procedures. Mapping them in Entity Framework potentially produces the best features of both technologies.
When a database already exists that has implemented a great deal of logic in Stored Procedures, it can be reused.
Stored Procedures form a layer “underneath” the .NET code. Behind this abstraction layer, .NET developers do not need to understand implementation details.
Every Stored Procedure can have user-specific permissions.
Business logic implementation can be performed by a database developer whose skillset is strong in Stored Procedures.
Pluralsight.com has several video resources with tips about implementing the Entity Framework with Stored Procedures, including these three which I found helpful:
Below are some of the limitations mentioned in the videos:
For each entity, if any of the Create, Update, or Delete functions is mapped, then all 3 should be. For example, if you only map the Delete function to a Stored Procedure but at runtime your .NET code causes an Update to occur on that entity, it will throw an exception. It will not use the default Entity Framework functionality for the Update call.
You cannot map the parameters to a scalar value or a function. For example, you cannot map a Date parameter to DateTime.Now.
Also, Stored Procedures that return complex objects (rather than entities) do not support change tracking.
In the last line of your Insert Stored Procedures, include the line “Select SCOPE_IDENTITY() as ID” so that Entity Framework can push the newly generated record ID back into memory.
The biggest issue with using EF-to-STPs is that the developer of the Stored Procedures probably did not realize the database would be consumed this way. To do it right, a great deal of consistency is needed. There must be 1 Stored Procedure for Insert, 1 for Updates, and 1 for Deletes per entity. The Stored Procedure developer may have optimized for different things, such as readability or a reduction in round-trips to the database, instead of for a standard pattern that can be used by a high-level Framework. The below tips can be helpful as a guide to developing Stored Procedures if they are not yet finished.
You must specify exactly 1 parameter per entity column for select queries. Similarly, for an update query, every column must have a parameter in the Stored Procedure, and you must map every parameter in the Entity Data Model. Therefore, Default parameters have almost zero purpose for CUD Stored Procedures in our scenario.
Output parameters in Stored Procedures used for CUD cannot be mapped in the Entity Data Model.
The Entity Framework can consume Stored Procedures that have table parameters. However, they require more of an ad hoc coding structure. They do not fit well into our EF-to-STPs pattern.
The Entity Framework can utilize Function Imports to return entities from Select Stored Procedures. However, the ability to include reference tables is not built-in.
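For illustration, calling a Function Import looks like the sketch below; the context name and the GetCoursesByTopic import are hypothetical.

// A Function Import mapped to a Select Stored Procedure appears as an
// ordinary method on the context and returns an ObjectResult of entities.
using (var context = new CourseCatalogEntities())
{
    List<Course> courses = context.GetCoursesByTopic("algebra").ToList();
}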
Scenario: An Update Stored Procedure performs additional logic to update another table before updating the mapped entity’s table.
Outcome: The Entity Framework gets confused by the number of records updated and rolls back the update.
Scenario: A Select Stored Procedure uses IF statements and CASE statements to determine what Select query to use. Reading through the code, it’s difficult to detect, but one path returns Integers while another returns Bits.
Outcome: The Entity Framework generates metadata using one path and throws a cast exception at runtime.
The Entity Framework is a great tool, especially when it is used with LINQ queries to the database. While Microsoft and its partners describe how to map the Entity Framework to Stored Procedures, it only works in simple cases or where the Stored Procedure developer follows very strict standards. In most cases, it is not practical.
Still, the Entity Framework can very well be used to query Stored Procedures with a more verbose approach. It requires more code than the EF-to-STP mapping described above, but typically less than reverting to older technologies like ADO.NET. Just stay away from the mapping.
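As a rough sketch of that more verbose approach (the context, entity, and procedure names here are hypothetical), EF4’s ObjectContext can execute a Stored Procedure directly and materialize the results without any EDMX mapping:

// Query a Stored Procedure directly; no CUD mapping in the model is required.
using (var context = new OrdersEntities())
{
    var regionParam = new SqlParameter("@Region", "Midwest");
    List<Customer> customers = context
        .ExecuteStoreQuery<Customer>("EXEC GetCustomersByRegion @Region", regionParam)
        .ToList();
}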
Disclaimer: Most of my experience in this area is with Entity Framework Version 4, but I believe it applies to Version 5 as well.
September 1, 2013
Three years ago, I wrote a blog post titled “I Heart Karnaugh Maps.” In it, I described a technique that can be used to reduce the complexity of Boolean expressions. I provided sample diagrams as I worked through the technique step by step. In one of them, I added an alt tag of “Blank Karnaugh Map” to describe the starting point of the whole process. Little did I know that defining such a targeted search keyword would alter my perspective about Internet search traffic.
That blog post has always ranked in the top 20% of all my posts in terms of page views, largely due to the one keyword.
Among my blog’s highest ranking keywords are:
4th “Blank Karnaugh Map”
9th “Karnaugh Map”
12th “Blank Karnaugh Maps”
I first learned about Karnaugh Maps as a programming tool in my undergraduate studies at The Ohio State University. In Math 366: “Discrete Mathematics with Applications” I learned:
In order to refresh my knowledge before my blog post, I read an electronics book. In neither the book nor the course do I remember a specific need for blank Karnaugh maps or images of blank Karnaugh maps. Therefore, as time passed, I was led to wonder: why do so many people need blank Karnaugh maps?
I had a few conversations with fellow software developers who were intrigued by my Karnaugh Map post. Apparently, Karnaugh Maps were drawing more interest than I expected and my site was getting found by people looking to know more about them. I decided to brainstorm ways I could leverage this interest into a software product I could sell. I figured people were already coming to my blog to find information about Karnaugh Maps and software development tips. Wouldn’t a product that uses this technology to improve code be useful to my readers? I decided to call it Logic Reducer.
By no means am I a good Internet marketer. However, I had recently signed up for the Micropreneur Academy and was learning the value of performing market research before beginning product development. I wanted to make sure that I could plausibly make a profit based on the number of potential users and competition. I used three high-level approaches in my research.
There are various methods to research how many people are looking for a particular topic online. I used Micro Niche Finder to determine that “Karnaugh Maps” was being searched about 1,600 times per month. While this is not a lot, I was encouraged by the quantity of searches of some of the longer-tail keywords and the relative ease with which my website could potentially rank for them.
In order to gauge the number of potential users for my product, I researched U.S. employment data. I estimated there were about 800,000 computer engineers and 200,000 hardware engineers in the United States. These numbers were very encouraging.
Using Google, I found several applications on the Web that had the features I wanted to build. Many of them were free. However, in reading related forums, it seemed like they often failed because they froze up or had a very limited feature set. Additionally, I did not find any on the Internet that were newer than 2006. Most of the applications were downloadable, thick clients. They were lacking the advantages of being Web products.
Karnaugh Map Minimizer on SourceForge gets 0.5K hits/day
Logic Minimizer 1.2.1
Karnaugh Map – minimalization software
In 2010, smart phone app stores appeared to still be growing rapidly. I was leaning toward making an iPhone app that could be used as a companion to someone writing software on a personal computer. I found a few apps that already existed:
KarnCalc $.99
Karnaugh Map Optimizer $.99
Logic Shrinker (free)
While a few cheap apps already existed, I liked that there were not yet any iPad apps. Also, one review of an app stated that if Boolean Simplification (a feature I planned to develop) were included, he would pay 7 or 8 dollars for the app. My findings did not deter me from moving forward to the next step.
I liked that there were potentially many users of my idea. However, there were already very affordable ways to accomplish what I considered my main value proposition. Therefore, I moved forward cautiously. I was optimistic, so I secured the domain name LogicReducer.com. However, I was concerned that there wasn’t enough market interest, so I looked for validation of my idea.
I wanted to have a designer mock up my ideas so I could describe them more clearly. I got a quote from an offshore design agency for 4 screen mockups and 1 logo. It was going to cost $500.
I asked more people what they thought of my idea. The general response was that people were intrigued by the product being a Boolean logic reducer. However, I also posted to the forums in the Micropreneur Academy, and multiple people voiced warnings. They felt it catered to too small a niche and that I would not be able to generate enough revenue to make the project worthwhile.
I decided to stop working toward building Logic Reducer at that point. I was scared by the surprisingly high $500 investment for mockups and the concerns about the niche.
All in all, I did not spend very much time or money determining if Logic Reducer would be a good product to build, especially when compared to the time it would have taken to complete it and watch it fail. My blog content had exposed a tiny sliver of opportunity on the Internet. I researched that sliver and determined that I couldn’t make enough money from it for it to be worth my time.
However, I still think this is a good strategy for finding business ideas. Bloggers and content producers who have the ability to use analytics to see what topics are of interest to readers can use that knowledge to find problems in the world. Were I to find another surprisingly popular search keyword, I would research related business opportunities similar to how I did it before.
November 29, 2012
As December nears, it’s time to brainstorm my New Year’s resolutions for 2013. The Stuller family is expecting its first baby in that time, so it’s entirely possible that any goal I set for myself will immediately seem implausible, thwarted by a new dependent and many personal misconceptions about the transition to parenthood. Still, naming my goals will be helpful, even if only the most important bubble up to the surface over the next year.
In “Were My Microsoft Certification Exams Worth It?” I detailed my experience with these types of certifications. My conclusion, that the certifications themselves do not provide much value once a certain level of experience is obtained, has been upheld so far. Therefore, I’ve fully abandoned the idea of updating or getting new ones.
“Any sort of certification by a tool vendor is worthless. Any certification created by a methodology proponent is also worthless.” – David Starr on Herding Code episode 150
Despite the quote above, I’ve decided to make Scrum Certification a goal for 2013. I feel I have a good grasp of iterative project management processes but I could benefit from structured training about a specific, standard methodology. I understand that the certification itself is not the end goal, but it is a nice motivation as a milestone of my learning.
“If you go for certifications, remember your goal is not simply to put more letters after your name but to maximize the value of the educational experience. Winning the game requires that you not only keep your eye on the ball but also anticipate what the next pitch will be. Historical evidence suggests that the average lifespan of any system is approximately 18 months, so the planning process for how you’re going to replace what you just built starts pretty much the moment you finish building it. Planning is a lot more effective when you know what you’re talking about. Being informed on emerging trends is a fundamental job responsibility, something in our business that needs to be done daily to keep up.” – 10 Essential Competencies for IT Pros by Jeff Relkin
Yesterday I read Paul Graham’s most recent post, How to Get Startup Ideas. This blog post really cut to the core of me, as it described the best ways to identify startup ideas. While I sometimes come up with ideas for products, they don’t occur to me as frequently as I’d like. Paul articulated what type of people have the most success, namely those who “live in the future and build what seems interesting.” So that’s what I’m going to strive to do. Throughout my career, I’ve done a pretty good job of solidifying certain skills, such as specific technologies (SQL Server, C#, jQuery) or communication (writing and public speaking). However, I’ve been hesitant to jump into new, trending technologies. For a long time, I considered it beneficial to isolate myself from fad technologies, figuring I can save time that way. In 2013, I’m going to try to both live in the future and build what’s interesting. Maybe that means working a little on a mobile app or maybe HTML 5. I don’t want to constrain my options by listing any technologies before the year even starts. If something seems cool, I’m going to come up with an excuse to build something with it.
I’m scheduled to wrap up my Toastmasters Competent Communicator certification by the end of this year (more on this in a later post). In 2013, I’d like to leverage the practice I’ve had toward some sort of speaking engagement that advances my career.
As usual, I don’t just make goals for my career. There are also things I strive for in my personal life. Among those, I’d like to complete 1 big home improvement project (convert our half bathroom to a full bath or move my home office), get back in shape (how about a half-marathon?), and get involved (with my alma mater or our neighborhood).
What should I do with this blog? This is post 45, which means I’ve devoted over 40,000 words to it. My site visits are steadily increasing, and they even hold steady when I take an extended break. However, when I started 3 years ago I thought I would have had more traction by now. I enjoy having a forum with which to express myself, but a) I’m running out of content ideas and b) I’m losing motivation because of the slow traffic growth.
Traditionally, I tend to bounce back and forth between technical articles and more generic lessons based on personal stories. Which category speaks most to you? I’ve said everything I need to say from a self-expression standpoint, so when I continue to blog, I want to ensure I’m providing something useful for my readers.
Clearly, many of my ideas are half-baked. That’s partly because I still have a month to decide on New Year’s resolutions and partly because I have no idea what to expect of life with a child. Still, this post is important as a record of my mindset at this critical milestone in my life. It’s also an open invitation for discussion. What other goals or modern technologies should I be considering? How will a newborn affect my personal goals over the year? What type of content should I be producing?
Thanks for your time. You’ll be hearing from me again soon.
August 24, 2012
Most people do not realize how much data is available on the web via APIs. Indeed, we .NET programmers tend to be a breed that ignores the trendy new data feeds that are available. Perhaps it’s because it is intimidating to try to interact with sites written in PHP or Ruby on Rails, or maybe it’s because the only examples anyone ever shows are for the Netflix or Twitter APIs (2 APIs that are not particularly useful for an Enterprise Developer). Now is the time to expand your horizons. As more and more data becomes available, the usefulness increases for all types of applications. I aim to broaden your awareness of the entire domain of public web services (APIs) and show you that consuming them from .NET is easier than you might think.
Before I dive head first into all the details, here is an outline of what I will cover and the basic steps involved:
At this point, you may be asking yourself, “why do I care about Data Feeds, APIs, and public Web Services?” You should care because they are the technology through which online companies share their data. If you think it might be worthwhile to someday automatically retrieve the weather forecast, stock prices, sports scores, site analytics, etc. and make logical decisions based on the data, then pay attention, because the steps that follow are how you get started. A well-known example of a website using public APIs is Expedia.com, which retrieves commercial flight and hotel information from multiple providers based on a user’s travel criteria. There’s very little stopping us .NET developers from gathering together multiple APIs in a similar fashion.
The first step of connecting to an API is to choose which one you will connect to. Even if you already know, you need to find out more information about it. To do this, I used ProgrammableWeb.com, an online directory of public-facing APIs. When I started the exercise for this blog post, I did not know which API I wanted to test with, so I just clicked on API Directory | Newest APIs. As tempting as it was, I chose not to use the Stack Overflow API, because it is already built in .NET and is therefore disqualified from this blog post. Instead, the API for the Khan Academy caught my eye.
In case you haven’t heard of it, Khan Academy is a non-profit organization that provides a wide range of training videos and courseware for free online.
The Khan Academy API is perfect:
By clicking the Khan Academy link in the Programmable Web directory, I was eventually taken to the Khan Academy API documentation site.
Many large websites have thorough documentation about their APIs. Still, there is a wide range of information that you may come across when researching them. Some have client-side examples in .NET and others even have 3rd party libraries (e.g. MailChimp) specifically written for them. Khan Academy has a nifty tool called their API Explorer, which allows you to click on different types of REST queries and see example responses. I’ve seen similar tools on other sites too, such as Yahoo.
To start creating our .NET client application, we need to determine a sample query and retrieve response data. I’d like to generate local, .NET classes to consume the information sent back from Khan Academy.
There are a couple different ways of thinking about this:
If I know specifically what type of information I will be using, I can look for documentation on how to retrieve that narrow result set. In this case though, I want to start with as many classes as possible, to fill out my .NET solution as completely as I can.
The playlists/library/ query is great because it returns nested results. So, for example, it has information for playlists, with sub-information about videos, tags, etc.
Having a sample response like this is half the battle, and it’s not that difficult to get for REST services.
Once we know what sample query we are going to use, we continue to our 2nd big step, where we paste either a URL or Json results into a website named json2csharp.com.
This website converts the sample response data that we entered into .NET class definitions. With this step, we are letting the Json2CSharp website perform a significant step of the process for us automatically.
Why do we go through the effort to generate .NET classes like this?
Now that I have generated .NET classes, I will copy them into my Windows Clipboard (Ctrl-C) for later use.
Let’s keep things simple by creating a brand new Web Application. In Visual Studio 2012, select File | New | Project. Then select an ASP.NET Web Forms Application.
With the new application in place, let’s add the .NET classes into the solution.
First, add a class file to the project.
In this file, paste the .NET classes that are in your Clipboard over the default class. As a quick sanity check, you should be able to successfully compile the solution.
Json2CSharp sometimes struggles with ambiguous responses. As a result, it generates duplicate class definitions, as it did in our case.
Still, it’s nice that the class generator got us part of the way toward our final code. Let’s massage our classes to remove any classes that have numbers on the end. Also, switch any reference to the duplicates back to the primary class.
Delete: Item2, DownloadUrls2, Video2, Playlist2, DownloadUrls3, Video3, Playlist3
Alter: References to Item2 -> Item, References to Playlist2 -> Playlist, Reference to Playlist3 -> Playlist
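For reference, the cleaned-up classes might look roughly like the sketch below. The nested playlist and items properties mirror the query code later in this post; the remaining property names are only illustrative, since the real ones come from the Json response.

// Illustrative shapes only; actual property names are dictated by the Json.
public class Playlist
{
    public string title { get; set; }
    public string description { get; set; }
    public string url { get; set; }
}

public class Item
{
    public Playlist playlist { get; set; }
    public List<Item> items { get; set; }
}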
In order to deserialize Json results into our generated classes, we need to use another 3rd party tool named Json.NET. To add a reference to this library, we can use either of 2 methods:
At this point, we’ve got the framework setup in our solution to store strongly-typed representations of the Khan Academy data. Next, we need to write the code to retrieve that data.
Here is the snippet I put in the Default.aspx.cs file to automatically retrieve the data and format it with Linq.
public static List<Playlist> GetKhanVideos()
{
    // Download the raw Json from the Khan Academy playlists/library endpoint.
    var client = new WebClient();
    var response = client.DownloadString(new Uri("http://www.khanacademy.org/api/v1/playlists/library"));

    // Deserialize the Json into the generated classes using Json.NET.
    var j = JsonConvert.DeserializeObject<List<Item>>(response);

    // Collect playlists from the top level and from two levels of nested items.
    List<Playlist> playlists = new List<Playlist>();
    playlists.AddRange(j.Select(i => i.playlist));
    playlists.AddRange(j.Where(k => null != k.items).SelectMany(i => i.items).Select(i2 => i2.playlist));
    playlists.AddRange(j.Where(k => null != k.items).SelectMany(i => i.items).Where(k2 => null != k2.items).SelectMany(i2 => i2.items).Select(i3 => i3.playlist));

    // Filter out any null playlists before returning.
    return playlists.Where(p => null != p).ToList();
}
In our last step, we want to see the output of our query, so let’s leverage the drag-and-drop ability of Web Forms to easily visualize the data.
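For example, after dragging a GridView onto Default.aspx (the control name below is hypothetical), a few lines in the code-behind bind it to the playlists:

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // Bind the playlists pulled from the Khan Academy API to the grid.
        KhanPlaylistsGrid.DataSource = GetKhanVideos();
        KhanPlaylistsGrid.DataBind();
    }
}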
To see the new application in action, press F5 to run it.
With the help of a few 3rd party tools, retrieving and displaying any REST-based API in .NET can be easy. Not only that, but it’s going to get even easier. In Scott Hanselman’s ASPConf keynote, he showed an extension that is being developed by Mads Kristensen for Visual Studio 2012 that would eliminate several of these steps. It allowed an option in Visual Studio to “Paste JSON as classes,” thereby eliminating the need for the class-generation website. Microsoft realizes that the trend of creating and leveraging public APIs is not going away so they are doing something about it. And so should you.
Disclaimer: This product uses the Khan Academy API but is not endorsed or certified by Khan Academy (www.khanacademy.org).
August 1, 2012
I work at a small, but quickly growing consulting startup. At first, time-tracking was a significant pain for me and the owner. I put a lot of thought into automating my personal time-tracking and had decent success using FogBugz and Paymo. However, no matter what I tried, I was still required to spend about 3 hours a month (1.5 hours per billing period) exporting my time records into an acceptable MS Excel format to be given to the client. I understand that there was even more work done by the owner, as he made sure every consultant’s format was the same, copied and pasted records into one huge spreadsheet, and invoiced the client based on this report. What a mess! That was time that should have been spent on client work, advancing the projects to which we were assigned and making more money in the process. Thankfully, after some brainstorming and research, our company standardized on Harvest for time tracking.
What we wanted was an affordable, centralized solution that could track time and enable invoicing for all employees at the company. Harvest has delivered even more than we thought we needed! Included with our monthly payment, we receive a mobile app, API use, and expense tracking.
One thing I like about Harvest is that it is definitely a modern-looking website that is continually being updated. The site is also intuitive and visually appealing. It was not long before we learned how to make an invoice online.
Simply put, Harvest saves us time. What used to take me 90 minutes at the end of each billing period now takes me 5. It’s also very easy to manage and create invoices. I think it’s fair to say it’s worth the money considering we keep paying the fee every month.
In addition to its advertised features, online time tracking provides insight and transparency into key aspects of our business:
Members of our team have used the Harvest Time Tracking app for Android and iPhone. It is pure icing on the cake. It has its limitations but it saves me a lot of time in 2 particular use cases.
I can easily keep my expenses organized with this app. The best feature is the ability to add expenses and to take a picture of any receipt as soon as I receive it. By making it so simple, it encourages the habit of inputting expenses almost instantaneously, reducing the likelihood of losing track of a receipt or forgetting about a meal. It can be humorous to see a few members of our team out to eat on a business trip as we all take out our phones to take pictures of our separate receipts.
Simply explained, the smartphone app gives me mobile access to my online time sheet. This is especially useful if I leave the office to run an errand or go home for the day but absent-mindedly leave my timer running. I can quickly take out my phone, open the app, and stop the running timer. It syncs with the Harvest server soon after.
The iPhone app does have some limitations. The key item I’d like to see improved is the ability to edit time entries (which is possible on the website). As explained in my most common use case above, if I leave a timer running I might remember to stop it while I’m on the go. It would be nice to be able to edit the time entry to change the end time to be earlier (when I actually stopped working). As it is now, I have to remember to go back and change that time entry the next time I’m in front of my computer.
I’m not yet a connoisseur of web APIs, but Harvest seems to have a good one as far as I’m concerned. As an experiment, I wrote a simple website in just a few hours to display whether or not I am working at any given moment.
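The little site boiled down to something like the sketch below. This is not the actual code; the /daily endpoint, the subdomain, the credentials, and the timer_started_at field are assumptions about Harvest’s classic API.

// Rough sketch only; endpoint, subdomain, credentials, and field name are placeholders.
var client = new WebClient();
client.Headers[HttpRequestHeader.Accept] = "application/json";
client.Credentials = new NetworkCredential("me@example.com", "password");
string json = client.DownloadString("https://mycompany.harvestapp.com/daily");
// Assumption: a running timer shows up as a timer_started_at value on today's entries.
bool currentlyWorking = json.Contains("timer_started_at");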
I haven’t yet had the need for many of these but it is encouraging that Harvest time tracking integrates with many common software-as-a-service tools such as InDinero, Twitter, ZenDesk, and HighRise.
It should come as no surprise that I consider Harvest to be some of the best small business software I’ve used. It runs the core of our business and draws few complaints. It’s especially easy to bring on new consultants, requiring almost zero training. In that case we typically say something like “just use Harvest for time tracking.” And they do…
July 3, 2012
This blog series has focused on simple changes that can be made to a .NET solution’s web.config in order to enhance the development environment, enhance security, and improve troubleshooting capabilities.
You can find previous posts here:
This is the 3rd and final post of the series, in which we discuss ELMAH, short for Error Logging Modules and Handlers. I am definitely not the first to write about this, but it is such a useful tool that fits so snugly into the web.config that I had to include it in the series.
First, let’s explain what ELMAH is. It is an open-source component that can be easily added to a .NET project for the purpose of logging unhandled exceptions and notifying developers about them. What is an unhandled exception? It’s an error in code that a web application cannot respond to, often resulting in a “Yellow Screen of Death.”
ELMAH does not, by itself, rid your application of the Yellow Screen of Death, a screen that causes much frustration among users of your application. Instead, it automatically logs the details of the exception, including the stack trace at the time it occurred, and it can even email the development team that something bad happened.
Using ELMAH has become the standard for any project that I work on. It’s just so darned useful for troubleshooting issues and providing great customer service.
Most of the time, users encountering an error do not immediately send an email to support. If it’s a public website, the user might get immediately discouraged and leave the site. If it’s an Intranet website, one that users must use to perform their jobs, then he or she might back up and try it again a couple of times before giving up:
With ELMAH in place, it is easy to short-circuit the workflow and keep users happy. You can begin to troubleshoot the issue before the user has even contacted the support team.
Hello [username],
I work on the support team for [name of web application]. Our system automatically notifies us when users run into an error that it does not know how to handle, and we are aware that it affects your ability to continue through the application.
We do not yet know exactly what the problem is, but are working to find out more information and resolve the issue quickly. I will let you know as soon as this is fixed.
In the meantime, it would help us to resolve this more quickly if you could tell me [what steps you were performing when this crashed].
Lastly, I know it is less than ideal, but you might try to [perform your job through this work-around or alternative solution] until I get back in touch with you.
Thank you,
[Nathan Stuller]
[Title]
Being proactive makes a serious impression on users (and bosses). I’ve used this technique before to reach out to customers about exceptions that they didn’t even notice. It reduced my stress level by confirming that it was a low-priority issue and also allowed me to engage with a customer about my product.
The first step is to go to the ELMAH homepage. There you will find the 2 most important links to enable this setup:
There are a host of configuration options you can set to enable ELMAH to do exactly what you want.
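As a representative sketch (the log path, email addresses, and choice of modules below are placeholders, not a definitive setup), the core web.config wiring looks something like this:

<configSections>
  <sectionGroup name="elmah">
    <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah" />
    <section name="errorMail" requirePermission="false" type="Elmah.ErrorMailSectionHandler, Elmah" />
  </sectionGroup>
</configSections>
<elmah>
  <!-- Log exceptions to XML files in App_Data; other back-ends are available. -->
  <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data" />
  <!-- Email the team whenever an unhandled exception is logged. -->
  <errorMail from="errors@example.com" to="devteam@example.com" />
</elmah>
<system.web>
  <httpModules>
    <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
    <add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" />
  </httpModules>
  <httpHandlers>
    <!-- Browse logged errors at /elmah.axd (lock this down in production). -->
    <add verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" />
  </httpHandlers>
</system.web>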
I hope this 3-part blog series has helped you identify simple improvements that can be made to the web.config file. ELMAH, in particular, helps me delight my clients, and since it is so simple to implement, it is a no-brainer to me.
June 19, 2012
As stated in an earlier post…
There are 3 things every public website should be doing with their web.config
In this post, we’ll discuss how to encrypt sensitive sections of the web.config so passwords and other information cannot be easily read by those who gain access to the file.
Encrypting sensitive sections of the web.config is important because they are just that, sensitive. Think about your production web.config file. It may contain all sorts of data that you would not want to be accessible. There are often passwords for SQL database connections, passwords to an SMTP server, API Keys, or critical information for whatever system is being automated. In addition, web.config files are usually treated as just another source code file. There are probably many versions in your source control system right now. That means any developer on the team, or more accurately anyone with access to the source code, can see what information is in the web.config file.
In many cases, storing passwords in a web.config is itself unnecessary and should be avoided. However, we know it is all too easy to fall into the trap of placing them in this flexible, convenient file. Therefore, at the very least, certain sections should be encrypted so they cannot be easily read or used for evil.
In our example, we will encrypt two typical configuration sections: ConnectionStrings and AppSettings on a Windows 7 development machine.
Follow the below steps:
1. Open a command prompt with elevated (Administrator) privileges:
2. At the command prompt, enter:
cd "C:\Windows\Microsoft.NET\Framework\v4.0.30319"
3. Now enter the following to encrypt the ConnectionStrings section:
aspnet_regiis.exe -pef "connectionStrings" "C:\WebApplication1\WebApplication1"
In this case, C:\WebApplication1\WebApplication1 is the directory where our web.config is located.
4. Enter the following to encrypt the AppSettings section:
aspnet_regiis.exe -pef "appSettings" "C:\WebApplication1\WebApplication1"
For reference on all the command-line options of aspnet_regiis.exe, refer to this MSDN page.
Of course, it is possible you might need to be able to read the original, unencrypted data at a later time. Accessing that information is easy. Simply perform the previous steps but use the command-line option “-pdf” to decrypt the important sections.
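For example, decrypting the ConnectionStrings section mirrors the encrypt command above:

aspnet_regiis.exe -pdf "connectionStrings" "C:\WebApplication1\WebApplication1"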
Deploying your web application with encrypted web.config sections is simple, but it may not be obvious. This StackOverflow answer explains the steps best. Generally, any server or development machine that uses the same encrypted web.config data must use the same RSA key pair, which can be exported using the aspnet_regiis tool.
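As a sketch, the same aspnet_regiis tool handles the export and import of that key container; the container name, file path, and application pool identity below are illustrative.

Export the key container (including the private key): aspnet_regiis.exe -px "NetFrameworkConfigurationKey" "C:\keys.xml" -pri
Import it on the target server: aspnet_regiis.exe -pi "NetFrameworkConfigurationKey" "C:\keys.xml"
Grant the application pool account access to it: aspnet_regiis.exe -pa "NetFrameworkConfigurationKey" "IIS APPPOOL\DefaultAppPool"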
There you have it. You have successfully encrypted 2 sections of your web.config file. Take a look below to observe the before and after results:
<connectionStrings>
<add name="ApplicationServices" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\aspnetdb.mdf;User Instance=true"
providerName="System.Data.SqlClient" />
</connectionStrings>
<connectionStrings configProtectionProvider="RsaProtectedConfigurationProvider">
<EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
xmlns="http://www.w3.org/2001/04/xmlenc#">
<EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#tripledes-cbc" />
<KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
<EncryptedKey xmlns="http://www.w3.org/2001/04/xmlenc#">
<EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5" />
<KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
<KeyName>Rsa Key</KeyName>
</KeyInfo>
<CipherData>
<CipherValue>fK275KFHx9RKip16DTpwxLi4AHpvCpat4S3edgsDwco9PgudsMKc1qAyh9qNt2y+90qV4QIzyZXm8j27UV5J+R5rNruMUOROLWzVt8qkRYRM3ADoiCi5BJh2SsjE0guGXFbufZDgRpPFV5bstgZSBPYNiYXQF/aOLyQjPCE8VDo=</CipherValue>
</CipherData>
</EncryptedKey>
</KeyInfo>
<CipherData>
<CipherValue>CSdausUH7yWcY8t1sPUqiCooYreEauzi4t33gVJuWYcfhspsguTchJjwthUTMLqnulYRmCu8ZnhrVBepQo7PHO/4k5mwo3s46TsgFddvvUlyY/EDQf047LG0pocBDxL3MgIGf3b+atoG29Jg0Wnhj+M6urYG55Ko4nGp36JILQptlEn+sqCl2sQ99izykXtRWP7kC4tldO+YvBuZ7x8fyGoANwSKQFo7cH+dbydvCkRvaFQsRATdsQKGmSrXwIlkoNvxFb1CBPx0qDenyCs+vO4QyF2CZ8QB+UIJzA8EL7W/FovH5zDczjXQWTsFSmsI+vSojl9G9jSVLJFbwOpQBLIKxfximl5r</CipherValue>
</CipherData>
</EncryptedData>
</connectionStrings>
In a future blog post, we will discuss the 3rd party component ELMAH, which is vital to being notified when your users encounter exceptions in your web application.
Update 07/24/2012
It is possible to combine encryption with web.config transforms. I know this will work as I have done it before.
In my experience, I’ve done the following. I had to add an RSA section at the top of my web.config. For me, this went into my Web.Release.config transform, as I did not encrypt my default/development web.config:
<configProtectedData defaultProvider="MyRsaProtectedConfigurationProvider" xdt:Transform="Insert">
<providers>
<add name="MyRsaProtectedConfigurationProvider"
type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL"
keyContainerName="NetFrameworkConfigurationKey_viternuscom"
useMachineContainer="true" />
</providers>
</configProtectedData>
Also, the web.config sections need to be replaced at the section level. You cannot replace the name or connection string attributes. So, you can use something like the below configuration to replace the whole connection strings section in each environment:
<connectionStrings configProtectionProvider="MyRsaProtectedConfigurationProvider" xdt:Transform="Replace">
<EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
xmlns="http://www.w3.org/2001/04/xmlenc#">
<EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#tripledes-cbc" />
<KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
<EncryptedKey xmlns="http://www.w3.org/2001/04/xmlenc#">
<EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5" />
<KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
<KeyName>Rsa Key</KeyName>
</KeyInfo>
<CipherData>
<CipherValue>fV3NsFhZR/l0/5nvioFfjwjhhauNUTR96fQOK3QeRTW05ERDAQrFGj9MBt5Jh7Ca4rIS2JZfOfNTjTxWiEp/tjk+9LXVyPKrJYMiNlYiUmZGfV/amPsLPmRm2pOEyKwJhJLN6NyZdht/xGrf1ClDKO6CG1ViA5pK5R8Db8X9ul4=</CipherValue>
</CipherData>
</EncryptedKey>
</KeyInfo>
<CipherData> <CipherValue>eFvSbAzbVUzwa9Sl8V6t43kuwAcvmaPUjboSJ/oi+MMJXyqtqXS8dKSuxBy+E0rC8tUxxIfJppZNm+CCoKf9Rm39vW2flpgcsvm8ZNMekSf4r2GWYAvLw3vYvMBcbnFRqktlaM7cXia38+3KGN8skHzxioqrBgy2QQqqPWIPrmrCS440BRlXEck6XwAO9rZOERgM6+OtlRan4EuGoB0O4acJWbp51zWxkfzqxMb600BHkYzeIYkHH8GNvWo+LSQt6o+NYW+Q7sm/lLFY5hPp3pGTOygXPehT1b/3BWZM+1dJ5sh8sBXO+t5m7/Dzqt4nvMqArmdEUvQdhYAPauC3Uj9HjDFpHkbOjVEzohIvB0kJ1Wc3uP4VvE6CRMbAsrRiSNLDlT6OpXYVrArLk9c1bBA56nFXPMxLEpN1umRcCfaQY0qxKrZi/yJ8dKD/C/5Vo7o50f10jM9eUrt3/uS71bNJk5U9N7kO42tZZGXZMui51o6MWcYxSC7VQ3KdCpy6UacBnD8MYr7EHeZ591ATQds8dzcsXY7w6Lsg1pXLK74HqXMW/xDeLtBoWJxat9y+</CipherValue>
</CipherData>
</EncryptedData>
</connectionStrings>
I hope this helps.
May 9, 2012
No matter how large a project you are building or how many lines of code you maintain, the most important file in your whole solution is likely the web.config file. It potentially contains connection strings, API keys, passwords, etc. If any of this information is incorrect, you will likely see many problems in your application. Likewise, if a hacker were able to examine this file, it could mean disaster for your network. It is for these reasons that the web.config file must be treated with the utmost respect.
There are 3 things every public website should be doing with their web.config
Each of the above topics will be covered in a separate post. As for today, we’ll discuss #1. Visual Studio 2010 introduced web.config transforms, which make it dead simple to maintain configuration information for multiple deployment environments.
Imagine the not-so-rare network setup of a website that is deployed on a production server and a test server and is run locally by developers. In the old days, it was difficult to keep track of all the different environment-specific configuration options. Maybe you set the web.config correctly once for each environment and just never overwrote it during a new publish. Maybe you created your own configuration text files that were dynamically linked into the application. Either way, you had to spend extra time to solve this seemingly simple problem.
Thank goodness for modern IDEs.
Now it’s extremely easy to set up multiple configuration files in your Visual Studio solution, with these additional benefits:
Since the web.config transformation technology has been around for about 2 years now, I’ll try to introduce a new spin on it by demoing this with the Visual Studio 11 Beta.
How to set it up:
<appSettings>
<add key="apiKey" value="83ABC029538FED091ACDD"/>
</appSettings>
<connectionStrings>
<add name="DBConnectionString" connectionString="Data Source=DBServer;Initial Catalog=DatabaseName;Persist Security Info=True;User ID=userName;Password=password" providerName="System.Data.SqlClient"/>
</connectionStrings>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
<appSettings xdt:Transform="Replace">
<add key="apiKey" value="B153439AB3DE8FF9CA9D0"/>
</appSettings>
<connectionStrings xdt:Transform="Replace">
<add
name="DBConnectionString"
connectionString="Data Source=ProdDBServer;Initial Catalog=ProdDatabaseName;Persist Security Info=True;User ID=userName;Password=password"
providerName="System.Data.SqlClient"/>
</connectionStrings>
…
Beautiful. Your app has been deployed with the appropriate environment-specific web.config settings. In the next post, we will discuss how to encrypt secure information that is stored in the web.config file.
January 6, 2012
In mid-December, I saw an ad on StackOverflow.com and was immediately intrigued. “Rock, Paper, Azure!” was a contest run by Microsoft wherein programmers design bots to compete in a modified game of Rock, Paper, Scissors. The bots had to be hosted in Microsoft’s cloud computing platform, Azure, so you can easily see Microsoft’s motivation to give away some small prizes to influence developers into trying and (hopefully) adopting Azure.
Although I had plenty of things to keep me busy leading up to Christmas, the Rock, Paper, Azure marketing worked on me. I figured I could carve out an hour or two and write the best algorithm I could in that time. Besides, I would be entered into the grand-prize drawing just for competing, even with the simplest of bots.
I was immediately reminded of a school project from an early Computer Science course at Ohio State. The contest back then pitted “bug bots” from teams of students in the course against each other. Each team started out with a handful of bugs on a large virtual checker board. A bug could “convert” another student’s bug by facing it and issuing the “bite” command. The bitten bug would then become a member of the “biting” bug’s army. The game continues until one team has converted all bugs. If I remember correctly, there were only a few possible commands:
The contest may have evolved since then, but our bot did surprisingly well despite a very simple algorithm:
I’ve often wondered what additional strategy I would write into my bot if given another opportunity in such a competition. Rock, Paper, Azure was the challenge I was looking for.
Microsoft’s version of “roshambo” came with a few twists, such as the introduction of the dynamite and water balloon moves. Check out all the details and rules here. I liked that the game itself was simple, yet competing against other developers’ bots left many options for creative strategy. Additionally, I was extremely impressed with how simple it was to build the basic bot.
Game Rule Highlights:
It took me some iteration to come up with my eventual strategy, which turned out to be admittedly mediocre (98th place out of 162). I realized that my bot could keep track of the history of moves that it had made as well as the moves of my opponent. My plan was to try to detect whether my opponent was falling into a sort of pattern. I was especially concerned about the end of the round, when we both would be desperately throwing dynamite to close out the match. As you can see, my strategy had only a small amount of success.
Nonetheless, I thoroughly enjoyed my time creating and deploying my bot. I encourage Microsoft to search for more clever ways to get developers interested in learning and using their development platforms. In this contest, I got to expand my mind, learn more about Azure, and I even got a free t-shirt. Here’s to the next competition!