We are taught to revere the interface. As long as you build the right abstractions, we're told, how you actually store the data is an implementation detail that can be fixed up later. Just make a clean API that has a clear division of responsibilities, and you will be fine. I've heard this time and time again, particularly in reference to initial implementations of APIs that worked fine for N=100, but that we intended to "scale up later" to handle a thousand times that (it wasn't wishful startup thinking — we already had those use cases).
The problem is that for non-trivial applications, we often can't know what the "right abstractions" are until we understand our data at the desired scale. One of my favorite examples of this is Kafka, an open source messaging system originally developed at LinkedIn.
Most developers who have added features to edx-platform are familiar with ModuleStoreTestCase. If your tests exercise anything relating to courseware content (even if it's just creating an empty course), inheriting from this class will ensure that data gets cleaned up properly between individual tests. This is extremely valuable, but it can also be wasteful in many situations. During last week's hackathon, I created a faster alternative called
There are good guidelines for error logging out there. They tell you to use timestamps, UUIDs, levels, categories, source line numbers, etc. All of these are great to know, but the most important part is thinking hard about who your audience is, and being able to imagine yourself in their shoes.
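As a sketch of what those guidelines look like in practice, here is a minimal Python example that puts a timestamp, level, category, source line number, and a per-request UUID into each log line. The `payments.refunds` logger name, the `ORD-123` order ID, and the `request_id` field are all hypothetical, invented for illustration; the point is the last call, where the message is written for its audience (an on-call engineer) rather than for the code that raised it.

```python
import logging
import sys
import uuid

# Hypothetical format covering the common guidelines: timestamp, level,
# category (logger name), source line, and a per-request UUID so an
# operator can grep out a single request's trail.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s:%(lineno)d [%(request_id)s] %(message)s"
))
log = logging.getLogger("payments.refunds")
log.addHandler(handler)
log.setLevel(logging.INFO)

request_id = str(uuid.uuid4())

# Write for the reader: an on-call engineer needs "what happened" and
# "what to do next", not just an exception class name.
log.error(
    "Refund failed for order %s: gateway timeout after 3 retries; "
    "safe to retry via the admin console.",
    "ORD-123",
    extra={"request_id": request_id},
)
```

Note that every call through this handler must supply `request_id` via `extra`, since the formatter references it; in a real application you would attach it with a `logging.Filter` or a `LoggerAdapter` instead of passing it at each call site.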