The following is an archive of the first week of daily quizzes I have been sending to my colleagues. They are intended as exercises for .NET developers to test fundamentals and gotchas in the framework and the C# language. I will be posting weekly quiz archives going forward.
Daily Quiz #001
Can you spot the potential deadlock in this code?
How would you work around this? What principles or lessons can we learn from this?
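The original code listing has not survived in this archive. Here is a hypothetical reconstruction of the kind of class the quiz describes (the names FileMonitor and _currentFile are taken from the answer below):

```csharp
using System;

// Hypothetical reconstruction: the event is raised while the lock is held.
public class FileMonitor
{
    private readonly object _lock = new object();
    private string _currentFile;

    public event EventHandler FileChanged;

    public string CurrentFile
    {
        get { lock (_lock) { return _currentFile; } }
    }

    public void ChangeFile(string file)
    {
        lock (_lock)
        {
            _currentFile = file;

            // The bug: unknown code (the event handlers) runs while we hold _lock.
            EventHandler handler = FileChanged;
            if (handler != null)
                handler(this, EventArgs.Empty);
        }
    }
}
```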
Say the client code attaches this event handler:
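The handler listing is missing as well; it presumably did something like the following, where Invoke marshals synchronously to the UI thread and blocks until the UI thread has run the delegate (uiControl, statusLabel, and monitor are invented names):

```csharp
// Hypothetical WinForms handler attached by the client:
monitor.FileChanged += (sender, e) =>
{
    // Control.Invoke is synchronous: it blocks this (background) thread
    // until the UI thread has executed the delegate.
    uiControl.Invoke(new Action(() =>
    {
        statusLabel.Text = monitor.CurrentFile;   // tries to take the lock
    }));
};
```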
You may run this code and it will seem fine: in a single-threaded application it executes without deadlocking.
Now try running the FileMonitor on a background thread and marshalling the event handler code to your UI thread in a Windows Forms / WPF application.
When the UI thread accesses FileMonitor.CurrentFile it tries to acquire the synchronization lock around _currentFile. The event source, on the background thread, holds that lock and will not release it until the event handler returns; the handler cannot return until the UI thread acquires the lock. Each thread is waiting on the other - you are deadlocked.
Why doesn't this code deadlock in a single-threaded application? According to the C# specification (8.12):
"While a mutual-exclusion lock is held, code executing in the same execution thread can also obtain and release the lock. In contrast, code executing in other threads is blocked from obtaining the lock until the lock is released."
What is the lesson here? Never call unknown code from inside a lock statement block. Unknown code, by definition, could call back into any of your data structures. It's foolish to try to design around this - avoid the problem by never calling unknown code while holding a lock.
Unknown code includes event handlers, delegates (Action<T>, Func<T>, and the rest), virtual methods, and anything else whose implementation you do not control.
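One workaround, sketched against a hypothetical FileMonitor of the shape the answer describes: mutate the state inside the lock, then raise the event only after the lock has been released.

```csharp
using System;

public class SafeFileMonitor
{
    private readonly object _lock = new object();
    private string _currentFile;

    public event EventHandler FileChanged;

    public string CurrentFile
    {
        get { lock (_lock) { return _currentFile; } }
    }

    public void ChangeFile(string file)
    {
        lock (_lock)
        {
            _currentFile = file;
        }   // lock released before any unknown code runs

        EventHandler handler = FileChanged;   // snapshot avoids a null race
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}
```

Handlers may now observe state that has changed again since the event was raised, but that trade-off is almost always preferable to a deadlock.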
Daily Quiz #002
You're reading a CSV file using the following methods, which together return a sequence of sequences of numbers:
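The listing is missing from the archive; here is a reconstruction of the kind of code the quiz describes (method and type names are guesses):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

public static class CsvReader
{
    // Hypothetical reconstruction: the reader is disposed before the
    // (lazy) query is ever enumerated.
    public static IEnumerable<IEnumerable<int>> ReadNumbers(string path)
    {
        using (TextReader reader = new StreamReader(path))
        {
            return ReadLines(reader)
                .Select(line => line.Split(',').Select(int.Parse));
        }
    }

    private static IEnumerable<string> ReadLines(TextReader reader)
    {
        string line;
        while ((line = reader.ReadLine()) != null)
            yield return line;
    }
}
```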
You call the code like so:
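The call site is missing too; assuming the reader method is named something like ReadNumbers (a guess), it probably resembled:

```csharp
// Enumeration happens here, long after the using block has disposed the reader.
foreach (var row in CsvReader.ReadNumbers("data.csv"))
{
    Console.WriteLine(row.Sum());
}
```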
The application throws an exception. Without compiling the code: where, and why? Assume the file exists, is readable, and is well-formatted.
We tried to read from the file after closing it. LINQ queries are not executed eagerly; they are evaluated on demand (deferred execution). Because the TextReader was disposed at the end of its using block but the query was executed later, the file was no longer available when the query expression finally tried to read its contents.
This was relatively easy to spot in this example, but what if we had returned the IEnumerable<IEnumerable<int>> from a public method in a library, to be enumerated who knows where?
There are several possible solutions to this problem. Consider encapsulating the TextReader resource in a single method - which method would you choose?
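One possible shape for such a method, sketched here with invented names: because the method body is a yield-based iterator, the using block stays open for the lifetime of the enumeration, and the reader is disposed when the caller finishes (or abandons) it.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

public static class SafeCsvReader
{
    // A sketch of one fix: the outer method is itself an iterator, so
    // disposal happens only when enumeration completes.
    public static IEnumerable<IEnumerable<int>> ReadNumbers(string path)
    {
        using (TextReader reader = new StreamReader(path))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
                yield return line.Split(',').Select(int.Parse);
        }
    }
}
```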
Beware of multiple enumerations - if you're going to enumerate over the resource more than once then be very careful about where you will dispose of it.
For more see Bill Wagner's More Effective C#, Item 42: Avoid Capturing Expensive Resources. More coming from this chapter soon.
Daily Quiz #003
1. How many allocations did I just make? For how much memory?
2. How many allocations? How much memory?
3. How many allocations? How much memory?
4. This time?
Note: by "allocations" I mean allocations on the heap. Stack space is comparatively cheap.
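The four numbered snippets did not survive the archive. Based on the answers below, they were roughly of this shape (field counts, names, and sizes are guesses):

```csharp
using System;

struct A { public int X; public int Y; }   // a small value type

class C
{
    public object First;
    public object Second;
}

static class Quiz3
{
    public static void Demo()
    {
        A a = new A();           // 1. value type: allocated in-line, not on the heap
        A[] arr = new A[100];    // 2. one heap allocation; all 100 structs stored in-line
        C c = new C { First = new object(), Second = new object() };
                                 // 3. three heap allocations (the C plus its two fields)
        C[] cs = new C[100];     // 4. one allocation for the array; every element is null
        Console.WriteLine(arr[0].X + " " + (cs[0] == null));
    }
}
```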
See comments for a slight correction on this answer.
1) One allocation for 8 bytes. structs are value types and are allocated in-line.
2) One allocation of 400 bytes (100 * sizeof(a)).
3) Three allocations, one for 12 bytes (assuming 32-bit pointers, 4 bytes per field plus 4 bytes for superclass pointer) and two for 8 bytes each (is someone able to confirm this? I couldn't find precise documentation in the spec. Email me!)
4) One allocation for the array itself (100 references). However, each element of the array is initialized to null: populating the array takes a further 100 allocations, for a total of 101. In number 2 all of the elements are allocated in-line and default-initialized. Modifying those already-initialized elements is much more efficient than allocating new heap space!
The important part here is the number of allocations, not the memory usage. Memory isn't the only consideration when deciding whether to use value types or reference types. There is a semantic difference and there is a major performance difference if you're allocating a lot of memory. Heap allocation is expensive. Inline allocation of value types is much cheaper.
Daily Quiz #004
What are the three main principles that GetHashCode() must ALWAYS follow? Why are these important?
1. GetHashCode must be instance invariant. Method calls on the object should not change the hash value.
2. Objects that are equal (as defined by operator==) must return the same hash code.
3. Hash functions should generate a random distribution across all integers.
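A sketch of a type that satisfies all three rules, using an invented Customer class: the hash is computed from a readonly field (so it can never change), equal objects hash equally, and string hashing spreads values across the integer range.

```csharp
using System;

public sealed class Customer : IEquatable<Customer>
{
    private readonly string _id;   // immutable, so the hash is instance invariant

    public Customer(string id)
    {
        if (id == null) throw new ArgumentNullException("id");
        _id = id;
    }

    public bool Equals(Customer other)
    {
        return other != null && _id == other._id;
    }

    public override bool Equals(object obj)
    {
        return Equals(obj as Customer);
    }

    public override int GetHashCode()
    {
        return _id.GetHashCode();   // equal objects share a hash code
    }
}
```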
Daily Quiz #005
I write the following code:
Why does it appear that the event is never raised? Hint: this one is quite subtle and involves code generated by the compiler.
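The code listing has not survived in this archive. Here is a hypothetical reconstruction of the kind of code that shows the problem (the names Base, Derived, MyEvent, and Fire follow the answer below):

```csharp
using System;

public class Base
{
    public virtual event EventHandler MyEvent;

    public void Fire()
    {
        // Reads Base's private backing field -- not Derived's.
        EventHandler handler = MyEvent;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

public class Derived : Base
{
    public override event EventHandler MyEvent;
}
```

Subscribing through a Derived instance and calling Fire() appears to do nothing: the handler is never invoked.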
When we declare the virtual event in Base:
public virtual event EventHandler MyEvent;
the compiler generates (roughly) the following code:
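The generated listing is also missing; it looks roughly like this (the exact shape varies by compiler version - older compilers lock during add/remove, newer ones use interlocked operations):

```csharp
// Approximately what the compiler generates for the field-like event:
private EventHandler myEvent;   // hidden backing field

public virtual event EventHandler MyEvent
{
    add    { myEvent = (EventHandler)Delegate.Combine(myEvent, value); }
    remove { myEvent = (EventHandler)Delegate.Remove(myEvent, value); }
}
```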
Note the private backing field for the event. When Derived declares the event override, (almost) the same code is generated in Derived, and the private field (Base.myEvent) is now hidden.
Declaring the derived event means that when clients attach to the virtual event, their handlers are stored in Derived's backing field; the hidden backing field in Base is never assigned, and there is no code in Derived to raise the new backing field.
One possible fix is to override the event using property syntax:
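That listing is missing too; one way to write it, sketched as a self-contained pair of classes (Fire is a hypothetical helper that raises the event from Base):

```csharp
using System;

public class Base
{
    public virtual event EventHandler MyEvent;

    public void Fire()
    {
        EventHandler handler = MyEvent;   // Base's backing field
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

public class Derived : Base
{
    // Explicit accessors that forward to Base's event, so subscriptions
    // land in Base's backing field after all.
    public override event EventHandler MyEvent
    {
        add    { base.MyEvent += value; }
        remove { base.MyEvent -= value; }
    }
}
```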
The problem now is that only Base can raise the event. Derived has no access to the private backing field of the event and cannot raise it (just like client code cannot raise an event).
Another possible solution is to raise the event in a virtual method in Base:
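Again the listing is gone; here is a sketch of that design, with a hypothetical OnMyEvent raiser following the usual protected-virtual pattern:

```csharp
using System;

public class Base
{
    public event EventHandler MyEvent;   // no longer virtual

    // Derived classes customize behavior here instead of overriding the event.
    protected virtual void OnMyEvent(EventArgs e)
    {
        EventHandler handler = MyEvent;
        if (handler != null)
            handler(this, e);
    }

    public void Fire()
    {
        OnMyEvent(EventArgs.Empty);
    }
}

public class Derived : Base
{
    protected override void OnMyEvent(EventArgs e)
    {
        // derived-specific work goes here...
        base.OnMyEvent(e);   // ...then Base raises the event
    }
}
```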
But at this stage, what have you gained by making the event virtual? Everything you needed from the virtual event override can be achieved in the virtual method override instead.
Bottom line: avoid virtual events. It's not worth the hassle and there's almost always a better way.
Bonus: declare your events with an empty delegate to avoid having to check for a null event each time you call them:
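The snippet is missing from the archive; the idiom looks like this (Downloader and ProgressChanged are invented names):

```csharp
using System;

public class Downloader
{
    // The empty delegate means the invocation list is never null, so the
    // call site needs no null check (at the cost of one trivial invocation).
    public event EventHandler ProgressChanged = delegate { };

    public void ReportProgress()
    {
        ProgressChanged(this, EventArgs.Empty);   // safe with zero subscribers
    }
}
```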