Sneaky portability bugs

October 29, 2011

Porting a 500K+ LOC application to a new platform is an exhausting process.

No amount of typedefs and #defines will save you from the inevitable crashes and memory corruptions that will occur when you first run your code.

Sure, good practices will save you a ton of time and QA effort in the end, but there are certain subtleties that even the best among us will miss when writing version 1.0.

I fixed a crash in some production code yesterday that reared its ugly head when compiled to target a 64-bit platform.

The bug itself resides in a relatively simple function which rotates or flips an image, in a section of code that calculates a per-row pointer increment when flipping an image vertically.

The offending code is shown below; see if you can spot the problem:

typedef unsigned int some_type;

size_t bytespp = 3;
some_type width = 637;
ptrdiff_t incr = -1 * width * bytespp;

If you don’t see it right away we can clear things up a bit by removing the typedefs. Note that these declarations apply only to the x86 version (i.e., when the crash does not occur). The x64 version is further down.

unsigned int bytespp = 3;
unsigned int width = 637;
int incr = -1 * width * bytespp;

Now it should be clearer: we’re multiplying an unsigned type by -1 to get a negative pointer increment.

While definitely not the best way to go about things, it works on a 32-bit platform and never caused any problems.

The result is -1911, as one would expect.

However, moving into 64-bit land produces a different result entirely. The variable incr takes on a huge, positive value all of a sudden and we (luckily!) crash with a segfault.
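If you want to watch it happen, here’s a minimal sketch of the same arithmetic (my reconstruction, not the production code; it assumes the usual models where size_t and ptrdiff_t are 32 bits wide on x86 and 64 bits wide on x64). Compile it once for each target:

#include <cstddef>
#include <cstdio>

int main() {
    size_t bytespp = 3;        // 32 bits on x86, 64 bits on x64
    unsigned int width = 637;

    // -1 * width happens first, as an unsigned 32-bit multiply; the
    // wrapped result is then widened (by zero extension!) to the width
    // of size_t before the multiply by bytespp.
    ptrdiff_t incr = -1 * width * bytespp;

    printf("incr = %lld\n", (long long)incr);  // -1911 on x86, 12884899977 on x64
    return 0;
}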

So I fixed the bug as shown below and scratched one more x64 pain in the ass off of my list.

// yes yes, truncation may occur due to the
// cast to a signed __int64 at the end, but
// in practice we will never deal with images
// where width * bytesPerPixel >= 2^63.
ptrdiff_t incr = -(ptrdiff_t)(m_width * bytesPerPixel);
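For what it’s worth, an equivalent formulation (just a sketch, not what I actually checked in) does the arithmetic in a signed type from the start, so unsigned wraparound never enters the picture:

// Cast each operand up front; fine for any image whose width times
// bytes-per-pixel fits in a ptrdiff_t, i.e., every image we handle.
ptrdiff_t incr = -((ptrdiff_t)m_width * (ptrdiff_t)bytesPerPixel);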

That being done, I was interested to know exactly why the code worked in a 32-bit build and was completely borked under x64.

The answer is probably obvious to anyone who has a good understanding of type promotions and signed to unsigned conversions in C, but a deeper look proves to be an interesting exercise for anyone who may not see the answer right away.

For reference, here is the un-typedef’d version of the code when compiling for x64:

unsigned int width = 637;
unsigned __int64 bytespp = 3;
__int64 incr = -1 * width * bytespp;

Now, in order to really understand why this all works out in an x86 build we’ll want to take a peek at the disassembly.

The x86 version


This assembly is pretty straightforward, but we’ll go through it step-by-step to make sure everyone’s on the same page.

	unsigned int bytespp = 3;
004D11C0  mov         dword ptr [ebp-18h],3
	unsigned int width = 637;
004D11C7  mov         dword ptr [ebp-24h],27Dh
	int incr = -1 * width * bytespp;
004D11CE  mov         eax,dword ptr [ebp-24h]
004D11D1  imul        eax,eax,0FFFFFFFFh
004D11D4  imul        eax,dword ptr [ebp-18h]
004D11D8  mov         dword ptr [ebp-30h],eax

 

004D11C0  mov dword ptr [ebp-18h],3

Move a value of 3 into the stack location for bytespp.

 

004D11C7  mov dword ptr [ebp-24h],27Dh

Move a value of 0x27D (637 decimal) into the stack location for width.

 

004D11CE  mov eax,dword ptr [ebp-24h]

Move the value of width into eax.

 

004D11D1  imul eax,eax,0FFFFFFFFh

Multiply eax by -1 (0xFFFFFFFF in two’s complement) and store the result in eax.

The mathematical result of this operation is 0x27CFFFFFD83 (treating 0xFFFFFFFF as unsigned), but as eax is a 32-bit register, it is truncated to 32 bits, i.e., 0xFFFFFD83. Read as a signed, two’s complement value that is exactly -637, so nothing has actually gone wrong yet, but we have more to do.

 

004D11D4  imul eax,dword ptr [ebp-18h]

Now multiply eax (which is holding the result of the first multiplication) by the value of bytespp, which is 3.

Mathematically, 0xFFFFFD83 * 3 = 0x2FFFFF889, or 12884899977 decimal. However, the result of multiplying two 32-bit integers is itself a 32-bit integer, so when all is said and done the value stored in eax is truncated to 0xFFFFF889, or -1911 decimal, which is correct.

 
This code has been merrily chugging along for years. In a 32-bit environment it just so happens to all work out in the end, and you can’t really blame the original developer for writing a bug this subtle.

Now let’s take a look at the x64 version and see exactly where it bit me in the ass around 11 a.m. yesterday.

The x64 version


	unsigned __int64 bytespp = 3;
000000014000111A  mov         qword ptr [rsp+30h],3
	unsigned int width = 637;
0000000140001123  mov         dword ptr [rsp+38h],27Dh
	__int64 result = -1 * width * bytespp;
000000014000112B  mov         eax,dword ptr [rsp+38h]
000000014000112F  imul        eax,eax,0FFFFFFFFh
0000000140001134  imul        rax,qword ptr [rsp+30h]
000000014000113A  mov         qword ptr [rsp+40h],rax

 

000000014000111A mov qword ptr [rsp+30h],3

Move 3 into the stack location for bytespp.

 

0000000140001123  mov dword ptr [rsp+38h],27Dh

Move 0x27D into the stack location for width.

 

000000014000112B  mov eax,dword ptr [rsp+38h]

Move the value of width into eax.

 

000000014000112F  imul eax,eax,0FFFFFFFFh

Multiply eax by -1 and store the result into eax.

It’s important to note here that eax is the lower 32 bits of the 64-bit register rax, and that on x64 an instruction that writes to eax zeroes the upper 32 bits of rax. At the language level the same thing is going on: the operand types are int and unsigned int, so the int is converted and the multiplication is carried out as a 32-bit unsigned int. No sign extension takes place because an unsigned value is widened by zero extension. The result is stored into eax.

Any significant bits past the 32-bit boundary are lopped off. Thus, the 64-bit register rax looks like this: 0x00000000FFFFFD83.

 

0000000140001134  imul rax,qword ptr [rsp+30h]

Multiply rax by the value of bytespp.  This is the important bit!

In this instance one of the operands is an unsigned 64-bit value (bytespp is of type size_t, which is an unsigned __int64 here). This means the multiplication is carried out at 64-bit width: the entire rax register is multiplied by the value of bytespp and the result is stored back into rax.

No sign extension takes place here, again due to the operand types. You might expect one, since the value in eax is logically -637, but the first multiplication produced an unsigned int, and unsigned values are zero extended, not sign extended, when they are widened for the 64-bit multiply. Look closely at the disassembly: our first operand is rax, a 64-bit register already holding 0x00000000FFFFFD83, and the second operand is bytespp, an unsigned __int64.

Due to the left-to-right grouping of the multiplication we missed the boat for a sign extension. If only the order of the operands were swapped and the expression was:

__int64 incr = -1 * bytespp * width;

it would have “just worked”. The first multiplication would then have been -1 * bytespp, so the -1 would have been converted (sign extended) to a 64-bit value up front, and the wraparound would have happened modulo 2^64; the next multiplication (by width) would have produced 0xFFFFFFFFFFFFF889, which reads back as -1911 when stored in a signed 64-bit variable. But obviously you don’t want correctness to hinge on the order of the operands in an expression that looks associative.
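If you want to convince yourself, here’s a minimal sketch of both orderings (using standard C types in place of the MSVC-specific ones; converting the out-of-range result back to a signed type is technically implementation-defined, but wraps as shown on every mainstream compiler):

#include <cstdio>

int main() {
    unsigned int width = 637;
    unsigned long long bytespp = 3;  // stands in for size_t on a 64-bit target

    // (-1 * width) goes first: it wraps in 32-bit unsigned arithmetic,
    // and the wrapped value is zero extended to 64 bits.
    long long bad = -1 * width * bytespp;

    // (-1 * bytespp) goes first: the -1 is converted to 64 bits before
    // the multiply, so the wraparound happens modulo 2^64 instead.
    long long good = -1 * bytespp * width;

    printf("bad  = %lld\n", bad);   // 12884899977
    printf("good = %lld\n", good);  // -1911
    return 0;
}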

Anyhow, back in the real code, the multiplication results in this:

0x00000000FFFFFD83 * 3 = 0x2FFFFF889

Note that this is mathematically equivalent to the result of the final multiplication in the 32-bit version, but in this case no truncation occurs. As such, a value of 12884899977 is calculated as the pointer increment and, after swapping the first scan line of the image, we increment the pointer off into obscurity and (luckily!) crash with a segfault.

If the error weren’t so large, the root cause might have taken longer to find. I love it when my code crashes early, I really do.

The Takeaway


This bug is not the fault of the original developer. Ok, it is, but I happen to know this person and he is very good at what he does. Really, if I’m half the engineer that he is in twenty years I’ll be more than content.

The fact is that it is hard to write code that will just work perfectly when ported from X-bit platform to Y-bit platform. No amount of typedefs will save you. Seriously, you will screw up somewhere, and that’s ok; QA professionals need jobs too, and God knows we pretty much suck at testing our own code.

On a side note, this code was not made any easier to understand by the typedefs involved. Two of them were necessary; namely, ptrdiff_t and size_t. Those types change depending on your target environment and, well, that’s what typedefs are for.

The last typedef was not necessary and only served to confuse things. Using a typedef for the width and height of an image was uncalled for here: the type the typedef maps to never changes. Don’t use typedefs solely to give your types different names if you never intend to map them to a new type. Seriously, I don’t mind typing unsigned int, especially when it makes my code clearer and easier to maintain.
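To make the distinction concrete (the names mirror the snippets above, not our actual headers):

#include <cstddef>  // size_t, ptrdiff_t: aliases that earn their keep,
                    // because the underlying type changes per target

typedef unsigned int some_type;  // gratuitous: it is unsigned int everywhere,
                                 // forever, and the alias only hid the
                                 // unsignedness that caused the bug above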

I subscribe to the Linus Torvalds view on typedefs. They are not meant to be paraded about like a Weimaraner aiming for best in show. They serve a purpose; don’t abuse them.

Filler Stuff

October 6, 2011

I’ve been short on time over the past couple of weeks due to an increased workload in my day job and a pending move (well, we actually finished the move… in the rain… last night).

As I don’t like to let things stagnate too much, here is a great post by Ted Dziuba on the pitfalls of node.js, some tripe by the node.js’ers who completely missed the point, and a great followup post by Ted.

People also call me “narrow minded” at times.  I call it “being right and knowing it”.  Meh.

I now leave you with what I see when I take a right turn out of my new place.  Be back soon with something more substantive.

Broken Features Are Worse Than Missing Features

September 21, 2011

It’s about 8:30 p.m. and I’m at work waiting for some code to compile, so it seemed like a good opportunity to squeeze in a quick rant.  Obligatory XKCD reference:

[xkcd: “Compiling”]

Anyway, today’s (tonight’s) rant is about broken software “features”.  A broken “feature” is a “feature” that, well, just plain doesn’t work, or at least one that doesn’t accomplish the goal it explicitly or implicitly claims to.

As I mentioned earlier, I’m at work, and I will be for the foreseeable future.  I’m hunting down a crash that occurs only on our 64-bit production builds.  The difference between a production build and a local build in my case (aside from the machine it is compiled on) is that we use Intel’s C++ compiler for our release builds.

The Microsoft VS2005 build compiled on my dev machine works great.  The Intel build crashes predictably after performing a certain action, though there’s a lot going on, so it’s anyone’s guess as to what the cause actually is.

First, I tried generating a MAP file in a vain attempt at a quick fix, hoping that the exception offset would give me enough information to reason out the cause if I could only get some insight into the general area of code that was causing the problem.

Unfortunately, I couldn’t find a correlation between the exception offset and the address of any function in the MAP file.  I honestly don’t know how this could be, but I don’t have time to sort that out tonight.  So, I decided to just move forward, build a release version with debug symbols, and attach a debugger remotely.

Or so I thought. In fact, I can’t seem to build a release version with debug symbols, nor can I build a release version with full optimization (/Ox) enabled. The Intel linker barfs all over itself, running out of memory at the end of a 20-minute build. Yay.

I did some searching and I found a few forum entries reporting this issue. And lo and behold, I found the solution on Intel’s official forum. What was it, you say? The solution, per an Intel employee: “Don’t use /O3 (or /Ox), use /O2”.

REALLY?! Intel’s advice is “Don’t use those two features of our compiler if you want it to work”? How about this: don’t ship features that don’t work in the first place! This is not the only instance of this issue; there are failure reports all over the web.

After taking Intel’s advice and dropping down to /O2, I find that /O2 doesn’t work either! In order to get a release build with symbols I had to turn off optimizations completely. Great, whatever, at least I have something I can work with…

Or so I thought… again.  Guess what?  The crash doesn’t occur with optimizations disabled.  Grrr….

After a bit more poking around I found that Intel’s compiler and linker executables are 32-bit programs, so even on my 64-bit monster-of-a-machine build server with gobs and gobs of memory, Intel’s toolchain can only see 4GB of virtual address space. (Why does it need more than 4GB of memory to compile my program with optimizations enabled while also generating a .pdb file? No idea, but it does.)

Software developers, please take note: a feature is not a feature in and of itself. It must provide some real utility to some moderately sized segment of your customer base. If your feature doesn’t work, it ain’t a feature at all! It’s just another drop-down option in another tab page in another dialog box in another program that just plain don’t work.

Save us all some time and stress: just hide the damn thing until you can get it working reliably, or strike it off your requirements list entirely.

I for one am turning on my psychic debugger while I go about instrumenting this code a bit, good night.

My New Beard Trimmer, and a Lesson in How Not to Treat Your Customers

September 15, 2011

I recently purchased the iStubble beard trimmer from Conair.  Aside from the obvious marketing ploy of adding the letter ‘i’ to the beginning of, well, any product in a lame and lazy attempt to make it seem hip and cool, it received rave reviews from just about every online store out there and I liked the idea of a swiveling head as I have always had a hard time trimming the area around my neck.

Since the last trimmer that I purchased cost little more than a 6-pack of Bud, I decided to splurge a bit and give the iStubble a shot.

After a week of using it I was easily pulling off the “I try really hard to look like I don’t try at all” look, so mission accomplished. I liked the trimmer so much that I even pondered promoting it here, but considering that I have all of five regular readers, 60% of whom are my Mother, Grandmother, and girlfriend, I decided against it.

That sort of article just didn’t jibe with the overall theme of this blog. However, the events which have since come to pass have forced me to reconsider that decision…

Yesterday morning I was looking a bit shaggy, so I grabbed my iStubble and turned it on.  At least, I tried to turn it on, but nothing happened.  “Ok” I thought to myself, “I guess the battery is dead”, so I plugged it in and… nothing.  I found another outlet and plugged it in.  Nothing.

I took off the back to ensure that the battery didn’t somehow come loose, and it hadn’t.  I threw the back plate on again and, as I was running late,  decided to go to work looking a bit more unkempt than usual.  As I work in a systems engineering group this really isn’t a problem.  I left the trimmer plugged in just in case it needed more charging than it had in the past for whatever reason.

When I arrived home that night I checked the device once again.  I found that the back side, where the battery is encased, was nice and toasty from sitting on the charger all day.  I tried to turn it on again; nothing.  Now I’m getting pissed, so I made a mental note to call Conair’s customer service department the next day.  Here’s how that conversation went:

Rep: Hello, how may I help you?

Me: Hello, I bought an iStubble two weeks ago and it will no longer turn on.

Rep: A what?

Me: An iStubble.

Rep: What’s the model number?

Me: I don’t know the model number.

Rep: I can’t pull it up on my system if I don’t know the model number.

Me: Well, you guys sell it and it is called the iStubble.

Rep: I know sir, but I can’t do anything without the model number.

Me: Ok, well the product is listed on your website.

Rep: It is?

Me: …Yes.

Rep: <Silence>

Me: Ok, I guess I’ll look it up for you, one second… Ok, the model number is GMT-900R.

Rep: Ok, let me pull that up.  My computer is slow, so please tell me what happened while we wait.

Me: I did already, but I’ll explain again.  I have been using my iStubble for two weeks, I’ve probably used it a total of five times, and now it won’t turn on.  I’ve made sure that it is connected and charged.

Rep: How long ago did you buy this item?

Me: …Two weeks ago.

Rep: Do you have the receipt?

Me: No, I threw the receipt away after I opened and tested it.  I assumed that it wouldn’t break after two weeks of use (silly me I guess).

Rep: Ok, without the receipt you will have to send it into our Arizona plant where it will be examined and then we will determine if it is still under warranty.

Me: How could it possibly be out of warranty after two weeks?

Rep: All of our products are under limited warranty.

Me: Limited to less than two weeks?

Rep: I don’t know.

Me: So who pays the shipping charges?

Rep: Unfortunately you do.

Me: Well that is unfortunate.

Rep: I’m sorry sir, but I can’t seem to find this item in our system.  Are you sure that you gave me the correct model number?

Me: Well, your website says that I did, but so far this conversation has failed to inspire any confidence in your company.

Rep: Ok, well can you tell me the PIN number?

Me: I wouldn’t know where to find the “PIN number”, but I don’t have the device in front of me; I’m at work.

Rep: Ok, well you’ll have to call back with the PIN number.

Me: …Ok, goodbye.

So after that little runaround, I apparently needed some number on the trimmer in order to accomplish anything from the start.

That little nugget of information could have saved me some time and frustration, but that’s not the worst of it.  After dealing with this person I was under the impression that:

1. Conair does not stand behind their products (they offer a “Limited” warranty that apparently doesn’t cover the first two weeks of use).

2. Their service representatives have no clue what products they sell, and you have to do the legwork for them to find the model number (not the product name, the cryptic model number).

3. Their service representatives (at least, the one I wound up talking to) don’t really care that you are having an issue with their product.  At no time did I receive as much as an “I’m sorry to hear that”.

4. If your product is dysfunctional, *you* have to pay to ship it to a service center before you are even told whether they will replace it.

I for one will not be buying another Conair product after this little ordeal.  Oh well, live and learn.

The Perils of Automated Memory Management and the Law of Leaky Abstractions

August 28, 2011

In my day job I help to develop and maintain a relatively large software project which contains all of the control logic for a certain piece of hardware that my company produces.  This product is actually a conglomeration of various off-the-shelf hardware components which, when combined, form the basis of a time travel device which will certainly revolutionize the way we view and interact with the world.

…Ok, well, it’s actually a medical device used mainly by research institutions in the bio-pharma sector, but that’s neither here nor there.

As such, I am more of a low-level guy by trade. Now I’m only 27, so all of you COBOL and Forth guys still tweaking custom memory-mapped I/O routines for your PDP-11s can feel free to snicker at my lax use of the term “low-level”, but I live in the here-and-now, when most “software engineers” don’t know the difference between a byte and a nibble and couldn’t troubleshoot a deadlock if their lives depended on it.

I enjoy tinkering with hardware and I like when my program crashes and burns because the library I called into corrupted the stack with a buffer overflow error 10,000 instructions ago and I have to figure out what the hell is going on.  Really, I enjoy this stuff.  It forces me to appreciate the intricacies of the hardware I am programming for.  So, when I was asked to assist a fellow co-worker with a user interface project last March, I had some reservations.

The Problem

Now I’m not some stodgy old programmer who is stuck thinking that C is for newbs, misses writing out programs by hand and translating them into punch cards, and thinks that debuggers are akin to Voodoo mysticism.  Like I mentioned previously, I’m a young guy and I’ve only been programming for five years now.  I’m not a “guru” of anything, and hell; I’m not even the smartest guy in my engineering group.  I work with guys that can code circles around me and it’s awesome.

I appreciate the abstractions and tools that make our collective lives as programmers easier.  I enjoy the fact that I can write a GUI for a test program in a matter of hours without having to delve into the nasty corners of Win32 land or busting out my COM reference material.  In particular, I think that .NET is a great technology created by many people who are assuredly more intelligent than I am, and C# is a top-notch language that could (should) certainly be used as a model for new programming languages to come.

However, I, like many programmers before me, have come to realize first hand that these abstractions are only as good as the underlying implementation, and the problem with that is, when the underlying implementation is poor or even slightly problematic, the abstraction loses its utility and causes more harm than good.

The Bug

Case in point: creating a BitmapSource object from a MemoryStream in C#. We produce a medical imaging system, and the GUI I am currently working on requests images via a TCP connection from the control application; they arrive as a big, Base64 encoded string. In order to turn this string into an image that we can display to the user, we need to decode it and use the returned stream to initialize the BitmapSource that WPF controls want. Simple, right?

Yes, it initially seemed so.  However, I noticed that, under a certain usage scenario, our application would crash due to an OutOfMemoryException.  Now this perplexed me a bit at first.  I am relatively experienced with C# and .NET, and I know how to handle resources, managed and native, in order to avoid this sort of thing.  It appears however that, in this case, my .NET-fu had failed me.

After smoking a few cigarettes and bouncing a racquetball off the walls of my cubicle for a bit, I had a moment of clarity and realized that the BitmapSource images were not being cleaned up by the GC due to a simple bug: a particular event on an instance of (what is essentially) a global object maintained a reference to the object which referenced the BitmapSource, which in turn prevented these things from being reclaimed by the garbage collector. I added a few lines of code to ensure that we unsubscribed from the event before going out of scope, and the OOM exception was licked… sort of.
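The shape of that bug, sketched in C++ to match the other code on this blog (the real code is C#, and every name below is invented): a long-lived publisher holds a callback, and the callback keeps its subscriber, plus everything the subscriber references, alive until somebody explicitly unsubscribes.

#include <functional>
#include <memory>
#include <vector>

struct ImageView {
    std::vector<unsigned char> pixels;       // the expensive payload
    void onRefresh() { /* redraw */ }
};

struct Publisher {                           // effectively a global, i.e., a GC root
    std::vector<std::function<void()>> handlers;
};

int main() {
    Publisher global;
    {
        auto view = std::make_shared<ImageView>();
        // Subscribing captures a strong reference to the view...
        global.handlers.push_back([view] { view->onRefresh(); });
    }   // ...so the view, pixels and all, outlives its scope.

    global.handlers.clear();                 // the fix: unsubscribe, dropping
                                             // the last reference
    return 0;
}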

Even though these images were now being reclaimed properly (a non-deterministic process of course, but acceptable given our application’s intended use), I noticed that the memory usage was about double what it should be when displaying these images.

Each image comes in at around 3MB and, worst case, we display forty of them in a page of images. While paging through these images I noticed that the application’s total memory usage was jumping up by ~240MB per page, not ~120MB as one would reasonably expect. I realized that I could still cause an OOM exception if I went from page to page more quickly than the GC could collect the large object heap (LOH) these things were allocated on, so we still had a problem.

As the cause of this was not obvious after some more nicotine, a few more cubicle hand-ball sessions, and a short stint perusing the code, I decided to bust out my trusty ANTS .NET Memory Profiler by RedGate*.

I found that most of the “extra” memory being used was in the form of a System.Byte[], i.e., the underlying image data.  Here is the object retention graph for one of these arrays:

With a bit more digging, I found that this Byte[] belonged to the MemoryStream used to initialize the BitmapSource.  I had previously assumed that calling Dispose() on the MemoryStream (as anyone who is even somewhat familiar with .NET knows to do for any type that implements the IDisposable interface) would release *any* expensive resources associated with said MemoryStream, but this was not the case.

The BitmapSource maintained a reference to the MemoryStream internally, and setting the StreamSource property of the BitmapSource to null after initialization did not clear the internal reference. So the BitmapSource is maintaining a (hidden) reference to the MemoryStream, the MemoryStream is maintaining a (hidden) reference to the Byte[], and the GC can’t do anything about it until the BitmapSource is eligible for garbage collection.

The BitmapSource copies the Byte[] upon initialization anyway, so maintaining a reference to the MemoryStream is unnecessary and amounts to a seemingly poor design on the part of the .NET team (there may be a reason for this, but I couldn’t find one after an hour or so of poking around the source code using Reflector.)

Apparently I am not the first person to run into this issue.  I found this helpful blog post detailing the problem and then a followup post detailing a simple workaround.  I implemented the workaround, ran some tests, and verified that the memory usage of my application in this scenario was indeed cut in half as expected.

The Lesson

Perhaps I have gone into too much detail regarding the specifics of a problem and not the general message I am trying to convey, but I feel that a comprehensive background regarding my motivation for writing this post (as well as a real world example) was appropriate. If you made it this far, congratulations; you have a high threshold for tedium or are very easily entertained.

The moral here is: you can’t just assume that resource use is not an issue you will have to tackle in a managed application. In order to solve this particular memory related issue in the “if you even think about resource usage while writing your app you will be tarred and feathered as a premature optimizer” world of .NET, I needed a prior understanding of:

  1. Which objects are allocated on the Large Object Heap.
  2. How often the LOH is collected, and that it is not compacted (you *can* run out of memory with almost no live allocations if the LOH is fragmented).
  3. When an object is and is not eligible for garbage collection.
  4. Implementation details of the GC such as GC roots.
  5. How to track the object lifetimes of your allocations so that they will be cleaned up as deterministically as is possible in this kind of environment.
  6. Implementation details of objects like our nasty BitmapSource to avoid unnecessary allocations and stale references.
  7. That events (i.e., instances of System.MulticastDelegate) maintain a reference to all subscribers of said event.
  8. How to use a good profiler because you will *never* be able to reason out every subtle, hidden resource problem in a managed application!

This may come off like an overly harsh rant.  I can hear the collective voice of commenters now…

“But Ed; managed languages like C# save us as programmers an incalculable amount of time by removing the burden of manual memory management!”  

Well, yes, they do, except when they don’t.

When any of these wonderful abstractions break down you will have a much harder time finding the root cause than you would in a language like C or C++ as you simply don’t have a transparent window into what the hell is going on behind the scenes when your code is executed.

Everything is opaque, and unless you stay well within the happy path (which simply isn’t possible for any non-trivial application) you will certainly have to break through some layer of abstraction sooner or later.

Imagine this: you work in a C# shop with five other C# devs who, like you, teethed on Java in school and don’t understand how anything really works under the covers, and all of a sudden you’re faced with a subtle resource allocation issue that is causing headaches for some number of your largest customers, the ones who really put your software through its paces. Ruh roh! What to do now? You don’t know, and they don’t know, because all of you have been coddled by layers of abstraction for your entire careers, and this is where the problem really lies.

As most newly christened software “engineers” know very little about low-level details and have even less experience working with them, old-school draconian guys with long beards are going to become extremely valuable as time goes on. There will always be those interested in the complex, idiosyncratic implementations of the systems we all take for granted on a daily basis; that knowledge has, however, ceased to be a prerequisite for entering the field. Can you imagine an electrical engineer who couldn’t recite Kirchhoff’s Current Law as if it were their home address? I can’t, yet that seems to be exactly where our profession stands.

Abstractions are only useful when you understand that which they are attempting to abstract away from you.  If you spend all of your time learning fancy APIs and the hottest dynamic-curly-brace-free-I-do-everything-so-you-don’t-have-to-know-anything programming language then, well, good luck.

You may be the coolest hot-shot rails developer in town, able to whip out a fancy web 2.0 site in a matter of minutes, but do you know how any of the function calls you make actually work?  Do you know how a TCP packet is structured?  Do you, without even thinking, know that using a collection type implemented as a linked list is probably a bad idea if you are performing a large number of inserts and deletes? Do you know why your database driven application slows to a crawl under heavy load? (HINT: your O/R mapper is probably not creating optimal database queries in all, if not most, situations.)

Everything is easy when you stay within the happy path and operate on a small scale.  But you as a programmer need to ask yourself; when things get hairy, do I have the chops to buckle down, break through the multiple layers of abstraction, and reason intelligently about the performance of my application?  If you can’t honestly answer “Yes” to that question then you may not be the awesome hacker you believe yourself to be.   Time to get back to the basics.

EDIT: In writing this post I realized that line breaks were not being inserted after the block quote above.  After flipping the WordPress editor from Visual to HTML mode, I found that a div was being inserted in place of line breaks in the HTML.  The Law of Leaky Abstractions strikes again, how appropriate.

*I do not work for RedGate nor do I receive any sort of reimbursement for mentioning their products; they simply make a bad-ass .NET profiler.

Another Blog…

August 24, 2011

Yes, another blog. I’m sure you’re very excited. I don’t promise that this blog will rise above the existing masses of vacuous drivel found in every nook and cranny of the internet. I don’t promise that reading any of my posts will make you feel better about yourself or the world in general. In fact, you will probably feel worse in some way for the experience. I don’t even promise that I will post on a regular basis. I do however promise that:

  1. If something pisses me off, I will probably bitch about it here.
  2. If I happen to come across something so awesome that I can’t help but feel that the world should know about it, I will probably blab about it here.  This scenario is relatively unlikely in comparison to item one above.
  3. I’m a programmer, so from time to time I’ll probably drone on about idiosyncratic details of various programming related topics that only a small portion of the population will care to hear.

So that’s what you have to look forward to, assuming you come back… and also assuming anyone actually reads this to begin with.  A large assumption indeed.

