3 months ago
A big part of an agile programmer’s job is API design. In an agile project, the architecture is defined from on high only in broad strokes, leaving the fine details of component design up to the implementer. Even in projects that are specified in much more detail, implementers will still find themselves in situations where they have to design something.
This means that programmers in an agile team have to be capable of weighing the pros and cons of various approaches in order to avoid causing performance, scalability, maintenance or other problems as the API is used and evolves.
When designing an API, we consider some of the following aspects. This is not meant to be a comprehensive list, but it should get you thinking about the code you're about to write.
Even if you don’t have time to write tests right now, you should still build your code so that it can be tested. You may not be the one writing the tests, so prepare the code so that others can test it.
It’s also possible that a future you will be writing the tests and will hate you for having made it so hard to automate testing.
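As a minimal sketch of what "build it so it can be tested" means in practice, consider giving time-dependent logic a seam. All of the names below are invented for illustration; the point is only that a test can substitute a fake clock where production code uses the real one.

```csharp
using System;

// Hypothetical example: the clock is injected, so time-dependent logic
// can be tested without waiting for real time to pass.
public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}

public class TrialPeriod
{
    private readonly IClock _clock;
    private readonly DateTime _expiry;

    public TrialPeriod(IClock clock, DateTime expiry)
    {
        _clock = clock;
        _expiry = expiry;
    }

    // Testable: a test passes a fake IClock instead of SystemClock and
    // can exercise both the expired and non-expired branches.
    public bool HasExpired
    {
        get { return _clock.Now > _expiry; }
    }
}
```

A test simply implements `IClock` with a fixed `Now` and asserts on `HasExpired`; no timers, no sleeps.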
Error handling is a very important consideration and involves how your application behaves in situations outside of the design.
Fail fast; enforce contracts
While we’re on the subject of error-handling, I want to emphasize that this is one of the most important parts of API design, regardless of which language or environment you use.
You haven’t tested the code, so you don’t know what kind of errors you’re going to get. If you ignore everything, then you’ll also ignore assertions, contract violations, null-reference exceptions and so on. The code will never be improved if it never makes a noise. It will just stay silently crappy until someone notices a subtle logical error somewhere and must painstakingly track it down to your untested code.
You might say that production code shouldn’t throw exceptions. This is true, but we’re explicitly not talking about production code here. We’re talking about code that has few to no tests and is acknowledged to be incomplete. If you move code like this into production, then it’s better to crash than to silently corrupt data or degrade the user experience.
A crash will get attention, and the code may even be fixed or improved. If you write code that crashes on all but the “happy path” and it never crashes? That’s great. Don’t program preemptively defensively in fresh code. If you have established code that interfaces with other (possibly external) components and you sometimes get errors that you can’t work around in any other way, then it’s OK to catch and log those exceptions rather than propagating them. At least you tried.
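A sketch of what "fail fast; enforce contracts" looks like at an API boundary (the service and method names are invented for illustration): validate inputs immediately and throw, rather than limping along with bad data that corrupts state somewhere far from the cause.

```csharp
using System;

public class AccountService
{
    // Fail fast: enforce the contract at the API boundary. A loud
    // ArgumentException here is far easier to diagnose than corrupted
    // account data discovered weeks later.
    public void Credit(string accountId, decimal amount)
    {
        if (string.IsNullOrEmpty(accountId))
        {
            throw new ArgumentException("Account id must not be empty.", "accountId");
        }
        if (amount <= 0)
        {
            throw new ArgumentOutOfRangeException("amount", "Amount must be positive.");
        }

        // ...apply the credit, confident that the inputs are valid...
    }
}
```

The guard clauses document the contract as well as enforce it: a reader of the signature alone knows what inputs are acceptable.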
In the next article, we’ll take a look at how all of these questions and considerations can at all be reconciled with YAGNI. Spoiler alert: we think that they can.
I recently read Erlang and code style by Jesper L. Andersen (Medium), which seems to have less to do with programming Erlang and much more to do with programming properly. At first glance, the advice seems to apply only to Erlang programmers, but the idea of strictly enforcing APIs between software components is neither new nor language-specific.↩
There’s an old problem in generated WCF clients in which the Dispose() method calls Close(), which throws an exception if the channel is already faulted, masking the original error.
You can see WCF Clients and the “Broken” IDisposable Implementation by David Barrett for a more in-depth analysis, but that’s the gist of it.
This issue is still present in current versions of the framework.
To use your WCF client safely, you wrap it in a helper class, as shown below.
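The original listing didn’t survive, so what follows is a reconstruction of the kind of wrapper described here, based on the standard Close/Abort pattern for WCF clients. The class name and the `CalculatorClient` in the usage comment are invented for illustration.

```csharp
using System;
using System.ServiceModel;

// A sketch of a safe wrapper: it sidesteps the broken Dispose() on the
// generated client by closing the channel explicitly and aborting it in
// the cases where Close() would throw.
public class SafeClient<T> : IDisposable
    where T : class, ICommunicationObject
{
    public SafeClient(T client)
    {
        Client = client;
    }

    public T Client { get; private set; }

    public void Dispose()
    {
        if (Client == null)
        {
            return;
        }
        try
        {
            if (Client.State == CommunicationState.Faulted)
            {
                // Close() throws on a faulted channel; Abort() is safe.
                Client.Abort();
            }
            else
            {
                Client.Close();
            }
        }
        catch (CommunicationException)
        {
            Client.Abort();
        }
        catch (TimeoutException)
        {
            Client.Abort();
        }
        finally
        {
            Client = null;
        }
    }
}

// Usage (CalculatorClient stands in for any generated WCF client):
//
// using (var safe = new SafeClient<CalculatorClient>(new CalculatorClient()))
// {
//     safe.Client.Add(1, 2);
// }
```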
If you can figure out how to initialize your clients without passing parameters to the constructor, you could slim it down by adding a “new” generic constraint to the parameter T in the wrapper class.
 The code included in this article is a sketch of a solution and has not been tested. It does compile, though.↩
In a project that we’re working on, we’re consuming REST APIs delivered by services built by another team working for the same customer. We had a discussion about what were appropriate error codes to return for various situations. The discussion boiled down to: should a service return a 500 error code or a 400 error code when a request cannot be processed?
I took a quick look at the documentation for a couple of the larger REST API providers: they use the 500 code only for catastrophic failures and the 400 code for anything related to input-validation errors.
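That convention can be sketched in an ASP.NET Web API controller (an assumption on my part; the controller, service and model names are invented for illustration): validation failures are the caller’s fault and map to 400, while unexpected failures on our side map to 500.

```csharp
using System;
using System.Web.Http;

public class OrderRequest
{
    public int Quantity { get; set; }
}

public class OrdersController : ApiController
{
    public IHttpActionResult Post(OrderRequest request)
    {
        if (request == null || request.Quantity <= 0)
        {
            // The caller's fault: 400 tells the client to fix the request.
            return BadRequest("Quantity must be positive.");
        }
        try
        {
            // ...process the order...
            return Ok();
        }
        catch (Exception)
        {
            // Our fault: a catastrophic, unexpected failure maps to 500.
            return InternalServerError();
        }
    }
}
```

The dividing line is responsibility: if resubmitting the same request could never succeed because the input is invalid, return 400; if the same request might succeed once the server is fixed, return 500.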
4 months ago
I’ve been asked for movie recommendations often enough of late that I thought I’d put together a bit of a summary of the mini-reviews I’ve made over the years.
The following list is not comprehensive nor does it necessarily comprise my favorite movies, though many favorites are here. I made the list with a particular couple of friends in mind and tailored it to include the movies I thought they might find interesting but had most likely not heard of or hadn’t yet seen. I also only chose movies that I’d documented as having seen in the last six or seven years.
One friend was specifically interested in what he termed “mind-f&@k” movies—movies with unexpected twists, that are bizarre or that otherwise make you think. With this in mind, please also note that some of the movies are not for the faint of heart. If you don’t like it, turn it off, but don’t come complaining to me. You may, however, feel free to judge me for the base creature that I am for having enjoyed whatever it was that shocked you so. I revel in your judgment; I feed off of your indignation.
Movies are sorted within their group by release date in ascending order.
Each sub-heading in the “Details” corresponds to a list of movies I’ve seen and reviewed. The movies listed under that section are the ones I thought noteworthy in that list. Of those, I selected my top recommendations and collected them into the “Genres” section.
There are certain directors and screenwriters who tend to deliver work that I consistently find intriguing and worth watching.
There are many more documentaries that I can recommend, but those are the top ones that are somewhat less US-centric.
Here are the lists of movies I found in my review articles. The link above leads to full reviews of the movies listed below it. IMDb will, of course, tell you what the rest of the world liked. Wikipedia will tell you what it’s about, probably with a detailed plot description that will ruin the movie.
These are from an older list that I kept before I started keeping more detailed notes.
I went through a bit of a David Lynch phase here, and he’s not for everyone but I kinda like him. He’s unique.
5 months ago
Much of the Internet has been affected by the Heartbleed (Wikipedia) vulnerability in the widely used OpenSSL server-side software. The bug effectively allows anyone to collect random data from the memory of machines running the affected software, which was about 60% of encrypted sites worldwide. A massive cleanup effort ensued, but the vulnerability has been in the software for two years, so there’s no telling how much information was stolen in the interim.
The OpenSSL software is used not only to encrypt HTTPS connections to web servers but also to generate the certificates that undergird those connections as well as many PKIs. Since data could have been stolen over a period of two years, it should be assumed that certificates, usernames and passwords have been stolen as well. Pessimism is the only safe assumption.
In fact, any data that was loaded into memory on a server running a pre-Heartbleed version of the OpenSSL software is potentially compromised.
How to respond
So we should all generate new certificates, ensuring that the root certificate from which we generate has also been re-generated and is clean. We should also choose new passwords for all affected sites. I use LastPass to manage my passwords, which makes it much easier to use long, complicated and most importantly unique passwords. If you’re not already using a password manager, now would be a good time to start.
And this goes especially for those who tend to reuse their password on different sites. If one of those sites is cracked, then the hacker can use that same username/password combination on other popular sites and get into your stuff everywhere instead of just on the compromised site.
Though there are those who are blaming open-source software, we should instead blame ourselves for using software of unknown quality to run our most trusted connections. That the software was designed and built without the required quality controls is an entirely different issue.
An advantage of open-source software is that at least we can pinpoint exactly when a bug appeared. Another is that the entire codebase is available to all, so others can jump in and try to fix it. Sure, it would have been nice if the expert security programmers of the world had jumped in earlier, but better late than never.
The site OpenSSL Rampage follows the efforts of the OpenBSD team to refactor and modernize the OpenSSL codebase. They are documenting their progress live on Tumblr, which collects commit messages, tweets, blog posts and official security warnings that result from their investigations and fixes.
They are working on a fork and are making radical changes, so it’s unlikely that the changes will be taken up by the official OpenSSL project, but perhaps a new TLS/SSL tool will be available soon.
VMS and custom memory managers
The messages tell tales of support for extinct operating systems like VMS, whose continued support makes for much more complicated code to support current OSs. This complexity, in turn, hides further misuses of memory-management functions like malloc and free.
Lots o’ cruft
This all sounds horrible, and one wonders how the software ran at all. Don’t worry: the code base contains a tremendous amount of cruft that is never used. It is compiled and still included, but it acts as a cozy nest of code wrapped around the code that actually runs.
There are vast swaths of script files that haven’t been used for years that can build versions of the software under compilers and with options that haven’t been seen on this planet since before… well, since before Tumblr. For example, there’s no need to retain a forest of macros at the top of many header files for the Metrowerks compiler for PowerPC on OS9. No reason at all.
There are also incompatibly licensed components in regular use as well as those associated with components that don’t seem to be used anymore.
Modes and options and platforms: oh my!
There are compiler options for increasing resiliency that seem to work. Turning these off, however, yields an application that crashes immediately. There are clearly no tests for any of these modes. OpenSSL sounds like a classically grown system that has little in the way of code conventions, patterns or architecture. There seems to be no one who regularly cleans out and decides which code to keep and which to make obsolete.
Security professionals wrote this?
This is to say nothing of how the encryption itself actually works. There are tales on that web site of the developers desperately trying to keep entropy high by mixing in the current time every once in a while, or even mixing in bits of the private key.
A lack of discipline (or skill)
The current OpenSSL codebase seems to be a minefield for security reviewers, or for reviewers of any kind. A codebase like this is also terrible for new developers, whose onboarding you want to encourage in such a widely used, distributed, open-source project.
Instead, the current state of the code says: don’t touch, you don’t know what to change or remove because clearly the main developers don’t either. The last person who knew may have died or left the project years ago.
It’s clear that the code has not been reviewed in the way that it should be. Code on this level and for this purpose needs good developers/reviewers who subject each change to careful, multi-faceted scrutiny during every review.
Living with OpenSSL (for now)
It sounds like it is high time that someone did what the BSD team is doing. A spring cleaning can be very healthy for software, especially once it’s reached a certain age. That goes double for software that was blindly used by 60% of the encrypted web sites in the world.
It’s wonderful that OpenSSL exists. Without it, we wouldn’t be as encrypted as we are. But the apparent state of this code bespeaks a failure to manage at all levels. The developers of software like this must be better than this. They must be the best of the best, not just anyone who read about encryption on Wikipedia.
OpenSSL will be with us for a while. It may be crap code and it may lack automated tests, but it has been tested and used a lot, so it has earned a certain badge of reliability and predictability. The state of the code means only that future changes are riskier but not that the current software is not usable.
Knowing that the code is badly written should make everyone suspicious of patches—which we now know are likely to break something in that vast pile of C code—but not suspicious of the officially supported versions from Debian and Ubuntu (for example). Even if the developer team of OpenSSL doesn’t test a lot (or not automatically for all options, at any rate—they may just be testing the “happy path”), the major Linux distros do. So there’s that comfort, at least.