The West deigns to help Islam modernize

Published by marco in Public Policy & Politics

In defense of Islam (3QuarksDaily) cites a blog post of the same name by Ross Douthat (New York Times) in which he uses quite dense prose to obfuscate the central message: he argues that the fanaticism of ISIS arises directly from Islamic scripture and shouldn’t be treated as necessarily crazy. The first step in ending a needless war is recognizing that the enemy is not crazy, but Douthat’s interpretation is more insidious, I think.

“Western analysts tend to understate not only the essential religiosity of ISIS’s worldview, but the extent to which that worldview has substantial theological grounding. It isn’t just a few guys making up a cult out of random bits of scripture; its political-religious vision appeals precisely because it derives “from coherent and even learned interpretations of Islam.” And we ignore the coherence of those interpretations at our peril: The Islamic State’s “intellectual genealogy” is intensely relevant to its political strategy, and its theology “must be understood to be combatted.””

He seems to argue, as others did before him—and as contemporaries like Sam Harris and Richard Dawkins do—that Islam is inherently violent. This is a spectacularly tone-deaf and unproductive line of argument that fails to recognize the unbelievable level of violence—and the religious justification for it—employed by the society in which Douthat and all of his equally tone-deaf contemporaries find themselves. The U.S. military is an extremely religious organization. It would be much more honest to just argue that humanity is inherently violent, independent of religious affiliation.

Douthat goes on to soften his initial paragraphs, calling on us to “give the rest of Islam credit for being, well, Islamic as well, and for having available arguments and traditions and interpretations that marginalized this kind of barbarism in the past, and God willing can do so once again.” This is a wonderful sentiment, but how is poor Islam to control its worst elements when crusading Christianity is constantly whipping up fervor with its bombs and suffering? And then there is the underlying superiority of these arguments, in which Douthat talks of “synthesiz[ing] Islam fully with Western modernity” as if there were a natural progression of civilization in which the current Western one was clearly superior to anything else. If we were to dig through it objectively, that may end up being the case, but assuming so a priori is not bound to be convincing to those over whom you are claiming superiority—especially when it’s not immediately obvious.

I happened to stumble on an older, unpublished blog post that I wrote in early 2009 describing the exact same attitude of the West toward Islam, in this case as channelled by the late Christopher Hitchens.

Hitchens on his friend, Rushdie (Jan. 2009)

Christopher Hitchens is an exquisite writer of English, as is Salman Rushdie. Rushdie’s books are, on the whole, amazing reads, as are Hitchens’s articles, when he’s not raging too hard about some of his more deep-seated, right-wing opinions (the unending justness and rightness of the Iraq war being one such hobby-horse). Assassins of the Mind by Christopher Hitchens (Vanity Fair) talks of his relationship with Rushdie over the years and, in particular, the violent attitude that leads followers of a religion to heed a fatwa issued against a novelist.

It is interesting that the violence of Islamic extremists gets consistent copy from Hitchens even though examples abound of similarly violent efforts at suppression by others as well. When he speaks of “a shadowy figure that has, uninvited, drawn up a chair to the table”, it is very easy to get confused and think he is talking of the Holocaust, when he is, in fact, speaking of the outbreak of violence one can expect when writing, saying or showing something that may be taken as offensive by extremist Islam. It certainly seems to be granting a tremendous amount of power to a population segment that (A) much more often gets the shitty end of the stick and (B) would have been on the wane for years now, were it not for the efforts of exactly the governments that Hitchens so wholeheartedly continues to support in their efforts to do the Crusades right, once and for all.

It is especially ironic that the publishing cycle brings this article to the light of day now, two weeks into the latest, ghastly Israeli steamrolling of Gaza, when international commentary has failed to bring any serious condemnation to bear against Israel for having thrown itself into an attack based seemingly exclusively on tactics that are war crimes. For many who see the problem, that “shadowy figure” of the Holocaust prevents them from speaking out as they would against any other nation that acted in a similar manner.

When Hitchens calls Iran “the prison house that is the Islamic Republic”, he is only partly correct, because such a statement can only be an exaggeration when applied to a nation with such a rich culture. It applies far better to a nation like North Korea, for example. His heart’s in the right place, but his condemnation of an entire nation—the 127 writers he mentions excluded, of course—puts him dangerously close to having his opinion interpreted as implicit assent to regime change in Iran, as has been sought by the same regimes that brought us the debacles in Iraq and Afghanistan, of which Hitchens still approves, though not unreservedly. So, he’s right, but a bit too enthusiastically so, exhibiting a bit of the fanaticism that he so rightly condemns in the target of his ire.

“To indulge the idea of religious censorship by the threat of violence is to insult and undermine precisely those in the Muslim world who are its intellectual cream, and who want to testify for their own liberty—and for ours. It is also to make the patronizing assumption that the leaders of mobs and the inciters of goons are the authentic representatives of Muslim opinion. What could be more “offensive” than that?”

Quino Data Driver architecture, Part I: Applications & Sessions

Published by marco in Programming

One part of Quino that has undergone quite a few changes in the last few versions is the data driver. The data driver is responsible for CRUD: create, read, update and delete operations. One part of this is the ORM—the object-relational mapper—that marshals data to and from relational databases like PostgreSql, SQL Server and SQLite.

We’re going to cover a few topics in this series:

  1. Applications & Sessions
  2. The Data Pipeline
  3. Builders & Commands
  4. Contexts and Connections
  5. Sessions, resources & objects

But first let’s take a look at an example to anchor our investigation.

Introduction

An application makes a request to the data driver using commands like Save() to save data and GetObject() or GetList() to get data. How are these high-level commands executed? Quino does an excellent job of shielding applications from the details but it’s still very interesting to know how this is achieved.

The following code snippet creates a session, retrieves some data, deletes part of it and saves a new version.

using (var session = application.CreateSession())
{
  // Build a list of Person objects, restricted to those named "john"
  var people = session.GetList<Person>();
  people.Query.WhereEquals(Person.Fields.FirstName, "john");

  // Delete those people, then save a new person
  session.Delete(people);
  session.Save(new Person { FirstName = "bob", LastName = "doe" });
}

In this series, we’re going to answer the following questions…and probably many more.

  • Where does the data come from?
  • What kind of sources are supported? How?
  • Is at least some of the data cached?
  • Can I influence the cache?
  • What is a session? Why do I need one?
  • Wait…what is the application?

Let’s tackle the last two questions first.

Application

The application defines common configuration information. The most important bits for the ORM are as follows:

  • Model: The model is the central part of any Quino application. The model defines entities, their properties, relationships between entities and so on. Looking at the example above, the model will include a definition for a Person, which has at least the two properties LastName and FirstName. There is probably an entity named Company as well, with a one-to-many relationship to Person. As you can imagine, Quino uses this information to formulate requests to data stores that contain data in this format.[1] For drivers that support it, Quino also uses this information to create the underlying data schema.[2]
  • DataProvider: The data provider encapsulates all of the logic and components needed to map the model to data sources. This is the part of the process on which this series will concentrate.
  • ConfigurationData: The configuration data describes which parts of the model are connected to which parts of the data provider. The default is, of course, that the entire model is mapped to a single data source. However, even in that case, the configuration indicates which data source: Sql Server? PostgreSql? A remote application server (2nd tier)? With a high-level API as described above, all of these decisions can be made in the configuration rather than assumed throughout the application. Yes, this means that you can change your Quino application from a two-tier to a three-tier application with a single configuration change.
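
To make that last point concrete, here is a toy sketch of the idea. The types and property below are invented purely for illustration—this is not Quino’s actual configuration API:

interface IDataSource { }

class PostgreSqlSource : IDataSource { /* connection settings */ }
class RemoteAppServerSource : IDataSource { /* server address */ }

class ToyConfiguration
{
  // By default, the entire model maps to a single source; switching a
  // deployment from two-tier (direct database) to three-tier (remote
  // application server) amounts to assigning a different source here,
  // e.g. configuration.DefaultSource = new RemoteAppServerSource();
  public IDataSource DefaultSource { get; set; }
}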

Sessions

So that’s the application. There is a single shared application per process.

But in any non-trivial application—and any non-desktop application—we will have multiple data requests running, possibly in different threads of execution.

  • Each request in a web application is a separate data context. Changes made in one request should not affect any other request. Each request may be authenticated as a different user.
  • A remote application-server is very similar to a web application. It handles requests from multiple users. Since it’s generally the second layer, it will most likely have direct connections to one or more databases. In this case, it will probably be in charge of executing business logic, most likely in a database transaction. In that case, we definitely don’t want one request using the transaction context from another request.
  • Even a non-web client-side application may want to execute some logic in the background or in a separate thread. In those cases, we probably want to keep the data used there separate from the data or objects used to render the other parts of the application.
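
Using only the calls from the opening snippet, that last case might look something like the following sketch (Task.Run is standard .NET; everything else comes from the example above):

// Give a background task its own session so that its data context
// stays separate from the sessions used by the rest of the application.
Task.Run(() =>
{
  using (var backgroundSession = application.CreateSession())
  {
    var people = backgroundSession.GetList<Person>();
    // ...work with people without affecting any other session...
  }
});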

That’s where sessions come in. The session encapsulates a data context, which contains the following information:

  • Application: The application will, as described above, tell the session which model and data provider to use.
  • Current user: For those familiar with ASP.NET, this is very similar to the HttpContext.Current.User but generalized to be available in any Quino application. All data requests over a session are made in the context of this user.
  • Access control: The access control provides information about the security context of an application. An application generally uses the access control to perform authorization checks.
  • Cache: Each session also has its own cache. There are global caches, but those are for immutable data. The session’s cache is always available, even when using transactions.
  • ConnectionManager: Many external data sources have transactable/shared state in the form of a connection. As with data, connections can sometimes be shared between sessions and sometimes they can’t. The connection manager takes care of knowing all of that for you.
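
As a quick sketch—again using only calls from the opening example—two sessions created from the same application are independent contexts:

using (var sessionA = application.CreateSession())
using (var sessionB = application.CreateSession())
{
  // Each session has its own user, cache and connections, so the
  // Save() in sessionA is shielded from the read in sessionB.
  sessionA.Save(new Person { FirstName = "jane", LastName = "doe" });
  var people = sessionB.GetList<Person>();
}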

If we go back to the original code sample, we now know that creating a new session with CreateSession() creates a new data context, with its own user and its own data cache. Since we didn’t pass in any credentials, the session uses the default credentials for the application.[3] All data access made on that session is nicely shielded and protected from any data access made in other sessions (where necessary, of course).

So now we’re no closer to knowing how Quino works with data on our behalf, but we’ve taken the first step: we know all about one of the main inputs to the data driver, the session.

In the next part, we’ll cover the topic “The Data Pipeline”.


[1] The domain model is used for everything in a Quino application—not just for the ORM and schema migration. We use the model to generate C# code like concrete ORM objects, metadata references (e.g. the Person.Fields.FirstName in the example), view models, DTOs and even client-side TypeScript definitions. We also use the model to generate user interfaces—both entire desktop-application interfaces and HTML helpers for building MVC views.
[2] See the article Schema migration in Quino 1.13 for more information on how that works.
[3]

This is code that you might use in a single-user application. In a server application, you would most likely just use the session that Quino created for your request. If an application wants to create a new session with the same user as an existing session, it would call:

var requestCredentials = requestSession.AccessControl.CurrentUser.CreateCredentials();
using (var session = application.CreateSession(requestCredentials))
{
  // Work with session
}

Greg Proops on San Francisco

Published by marco in Fun

Transcribed from Clasps:

“The beach in San Francisco is not like Brian Wilson’s endless-fucking-summer beach.

“Beaches in southern California, people go to ‘em and they’re surfing and they’re having picnics.

“Or if you go to a beach in Hawaii, people are playing volleyball or – what do you call those boards that you stand on and you have a stick and you look like a douchebag? What are those? – paddleboards. Yeah. Those are awesome, right? And people are out there doing that and shit.

“Anyways, in San Francisco, the beaches in San Francisco are dirty, the fence is crumbling, the concrete abutment is falling apart, there’s parts that are taped off with police tape and shit, the sand is repulsive and it’s not a beach that you go to to hang out all day in the sun and take your clothes off … it’s a beach [where] you put stones in your pockets and grouchily walk out into the water and say goodbye to this world. No one goes to the beach with a beachball and goes like this [vogues with beachball]. No. You go with a joint and a six-pack. And you sit there. And then when dark comes, you set fire to your car.”

Are you ready for ReSharper 9? Not for testing, you aren’t.

Published by marco in Programming

We’ve been using ReSharper at Encodo since version 4. And we regularly use a ton of other software from JetBrains[1]—so we’re big fans.

How to Upgrade R#

As long-time users of ReSharper, we’ve become accustomed to the following pattern of adoption for new major versions:

EAP

  1. Read about cool new features and improvements on the JetBrains blog
  2. Check out the EAP builds page
  3. Wait for star ratings to get higher than 2 out of 5
  4. Install EAP of next major version
  5. Run into issues/problems that make testing EAP more trouble than it’s worth
  6. Re-install previous major version

RTM

  1. Major version goes RTM
  2. Install immediately; new features! Yay!
  3. Experience teething problems in x.0 version
  4. Go through hope/disappointment cycle for a couple of bug-fix versions (e.g. x.0.1, x.0.2)
  5. Install first minor-version release immediately; stability! Yay!

This process can take anywhere from several weeks to a couple of months. The reason we do it almost every time is that the newest version of ReSharper almost always has a few killer features. For example, version 8 had initial TypeScript support. Version 9 carries with it a slew of support improvements for Gulp, TypeScript and other web technologies.

Unfortunately, if you need to continue to use the test-runner with C#, you’re in for a bumpy ride.

History of the Test Runner

Any new major version of ReSharper can be judged by its test runner. The test runner seems to be rewritten from the ground-up in every major version. Until the test runner has settled down, we can’t really use that version of ReSharper for C# development.

The 6.x and 7.x versions were terrible at handling the NUnit TestCase and Values attributes. They were so bad that we actually converted tests away from using those attributes. While 6.x had trouble reliably compiling and executing those tests, 7.x was better at noticing that something had changed without forcing the user to manually rebuild everything.
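
For context, here is roughly what tests using those attributes look like—a minimal NUnit example, not one of our actual fixtures:

using NUnit.Framework;

[TestFixture]
public class MathTests
{
  // Each TestCase attribute generates a separate test in the runner.
  [TestCase(1, 2, 3)]
  [TestCase(2, 3, 5)]
  public void TestAdd(int first, int second, int expectedSum)
  {
    Assert.AreEqual(expectedSum, first + second);
  }

  // Values generates one test per value (or combination of values).
  [Test]
  public void TestIsPositive([Values(1, 2, 3)] int value)
  {
    Assert.Greater(value, 0);
  }
}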

Unfortunately, this new awareness in 7.x came at a cost: it slowed editing in larger NUnit fixtures down to a crawl, using a tremendous amount of memory and sending VS into a 1.6GB+ memory-churn that made you want to tear your hair out.

8.x fixed all of this and, by 8.2.x, was a model of stability and usefulness, getting the hell out of the way and reliably compiling, displaying and running tests.

The 9.x Test Runner

And then along came 9.x, with a whole slew of sexy new features that just had to be installed. I tried the new features and they were good. They were fast. I was looking forward to using the snazzy new editor to create our own formatting template. ReSharper seemed to be using less memory and felt snappier—it was lovely.

And then I launched the test runner.

And then I uninstalled 9.x and reinstalled 8.x.

And then I needed the latest version of DotMemory and was forced to reinstall 9.x. So I tried the test runner again, which inspired this post.[2]

So what’s not to love about the test runner? It’s faster and seems much more asynchronous. However, it gets quite confused about which tests to run, how to handle test cases and how to handle abstract unit-test base classes.

Just like 6.x, ReSharper 9.x can’t seem to keep track of which assemblies need to be built based on changes made to the code and which test(s) the user would like to run.

[Screenshots: “Abstract tests cannot be executed” and “Here are the concrete test instances”]

To be fair, we have some abstract base classes in our unit fixtures. For example, we define all ORM query tests in multiple abstract test fixtures and then create concrete descendants that run those tests for each of our supported databases. If I make a change to a common assembly and run the tests for PostgreSql, then I expect—at the very least—that the base assembly and the PostgreSql test assemblies will be rebuilt. 9.x isn’t so good at that yet, forcing you to “Rebuild All”—something I no longer had to do with 8.2.x.
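
The pattern looks something like the following simplified sketch (not our actual fixtures):

using NUnit.Framework;

// Tests shared by all database backends live in an abstract fixture;
// the runner cannot execute these directly.
[TestFixture]
public abstract class QueryTests
{
  protected abstract string ConnectionString { get; }

  [Test]
  public void CanFilterByFirstName()
  {
    // ...query logic that runs against whichever database the
    // concrete descendant points at...
    Assert.IsNotEmpty(ConnectionString);
  }
}

// Each supported database gets a concrete descendant that the runner executes.
[TestFixture]
public class PostgreSqlQueryTests : QueryTests
{
  protected override string ConnectionString
  {
    get { return "Server=localhost;Database=test"; }
  }
}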

TestCases and the Unit Test Explorer

It’s the same with TestCases: whereas 8.x was able to reliably show changes and to make sure that the latest version was run, 9.x suffers from the same issue that 6.x and 7.x had: sometimes the test is shown as a single node without children and sometimes it’s shown with the wrong children. Running these tests results in a spinning cursor that never ends. You have to manually abort the test-run, rebuild all, reload the runner with the newly generated tests from the explorer and try again. This is a gigantic pain in the ass compared to 8.x, which just showed the right tests—if not in the runner, then at least very reliably in the explorer.

[Screenshots: “Huge scrollbar in the unit-test explorer” and “The unit-test explorer likes to expand everything”]

And the explorer in 9.x! It’s a hyperactive, overly sensitive, eager-to-please puppy that reloads, refreshes, expands nodes and scrolls around—all seemingly with a mind of its own! Tests wink in and out of existence, groups expand seemingly at random, the scrollbar extends and extends and extends to accommodate all of the wonderful things that the unit-test explorer wants you to see—needs for you to see. Again, it’s possible that this is due to our abstract test fixtures, but this is new to 9.x. 8.2.x is perfectly capable of displaying our tests in a far less effusive and frankly hyperactive manner.

One last thing: output-formatting

Even the output formatting has changed in 9.x, expanding all CR/LF pairs from single-spacing to double-spacing. It’s not a deal-breaker, but it’s annoying: copying text is harder, reading stack traces is harder. How could no one have noticed this in testing?

[Screenshot: “Output formatting is double-spaced”]

Conclusion

The install/uninstall process is painless and supports jumping back and forth between versions quite well, so I’ll keep trying new versions of 9.x until the test runner is as good as the one in 8.2.x. For now, I’m back on 8.2.3. Stay tuned.


[1]

In no particular order, we have used or are using:

  • DotMemory
  • DotTrace
  • DotPeek
  • DotCover
  • TeamCity
  • PHPStorm
  • WebStorm
  • PyCharm
[2] Although I was unable to install DotMemory without upgrading to ReSharper 9.x, I was able to uninstall ReSharper 9.x afterwards and re-install ReSharper 8.x.

The last offensive play by the Seahawks in Super Bowl XLIX

Published by marco in Sports

The article The Corruption of Football by Joshua Sperber (CounterPunch) offers a far better analysis of the final offensive play by the Seahawks—the one that led to an interception—than I could ever have hoped to make.

Sperber starts off by dissecting Emmitt Smith and Dave Zirin’s supposition that foul play must have somehow been involved. That is, “[t]he theory of foul play of course presupposes that Carroll made not only the wrong call but a completely nonsensical one that could only be attributable to corruption […]”. The NFL is certainly not a shining beacon on the hill and it’s very easy to suppose that something shady was going on—especially when the alternative is to suppose that everyone in the Seahawks organization is a moron and that even a relative tyro like yours truly could have made a more sensible call—namely, to run the ball with Marshawn Lynch, as Seattle had done on nearly every other play that game.

Sperber makes a strong case that passing on that play seemed stupid only to people who weren’t actual students of the game.

“Seattle had three plays to score a touchdown and, with 26 seconds remaining in the game, lacked the time (with only one timeout) to run on all three plays. The only question was when the Seahawks would pass, and it was entirely correct to pass on second down, as a second down run, assuming it was unsuccessful (as was Lynch’s earlier 3rd and 1 run), would have dictated a third down pass forfeiting any element of surprise. By passing on second down against a Pats’ defense playing run, Seattle had an excellent opportunity to win the game. And the odds were astronomically high that the pass would have ended in either an incompletion (stopping the clock, as they needed to do) or the game-winning touchdown. In fact, this season it has been statistically riskier to run from the one-yard line than to pass.”

Ok, fine. But he could have chosen a better passing play, couldn’t he? That is a valid point, apparently. Here’s Sperber addressing the kind of passing play Seattle chose.

“Insofar as Carroll should be criticized, it is not that he elected to pass but that he should have called for a relatively safer pass, for instance either to the corner of the end zone or out of a bootleg. Nonetheless, the interception did not result from a coach’s decision but from a weak play on the ball by Seattle receiver Ricardo Lockette and a remarkable (and devastating if he had been wrong) gamble and play by Patriot rookie defender Malcolm Butler […]”

Seattle just got very, very unlucky while, at the same time, the Patriots got very, very lucky. Maybe Seattle had used up all of its luck two plays earlier, when Kearse managed—against nearly all the laws of physics—to keep the ball bouncing off only himself and finally to catch it while lying on the ground.

If you want outrage, be outraged that “Seattle’s Jeremy Lane gruesomely broke his arm” and “Patriot receiver Julian Edelman stayed in the game after receiving a vicious and illegal (but not penalized) helmet-to-helmet hit”, neither one of which was even mentioned during the broadcast.

Instead, the NFL announcers conspicuously drew attention to the concussion that Seattle’s Cliff Avril sustained and for which he was removed from the game. Lane’s and Edelman’s injuries weren’t obvious, so those players were quietly taken away, while Avril’s case—he had been blown across the field—could not be ignored and was instead held up as an example of the NFL’s focus on safety.