
Title

The algorithm's purpose is to <i>guide</i> you, but to where?

Description

The pair of articles <a href="https://etymology.substack.com/p/survivorship-bias-and-the-algorithmic" author="The Etymology Nerd" source="Substack">survivorship bias and the algorithmic gaze</a> and <a href="https://etymology.substack.com/p/when-everything-becomes-a-fragment" author="The Etymology Nerd" source="Substack">when everything becomes a fragment</a> makes, for me, a good argument for caution about the tools that you're using.<fn>
AI is definitely a paradigm shift for programming, but in a way that's not discussed very much. We focus very much on how AI enables people who couldn't program anything before to program <i>something</i>. The scope of what it allows them to program grows with each version. Until it doesn't. That is, the technology might technically be capable of more, but it's also very limiting by its nature---tending toward attractors in the data---and because of guardrails in the tools, over which we have nearly no control with for-profit, private models.
In discussions like these, I've often used the example that Microsoft would be foolish to make Copilot just as good at helping you in Java as in C#. In fact, when you ask about Java, it should suggest that you do it in C# instead and offer an example. How can you not see that this is where we are headed? How can you not see that this is where we almost certainly already <i>are</i>, at least to a degree? The algorithms and LLMs work together to shine a spotlight on an ever-shrinking fraction of the available information. One of the articles above includes the following, extremely illustrative graphic.
<img src="{att_link}the_focus_funnel.webp" href="{att_link}the_focus_funnel.webp" align="none" caption="The focus funnel" scale="50%">
<h>LLMs in the context of other RAD tools</h>
For example, when we're talking about POCs for stuff that's already been done---but not by <i>us</i>---then, OK, an LLM gets you off the ground faster. Even then, you only get reasonably far if there is a relatively decent engineer guiding it; otherwise, you only get as far as the tool can go on its own---and your ability to "drive" it is limited. There has to be a rational problem-solver in the mix. Even with "reasoning", someone has to give the tool an indication of what "done" and "good" mean. Otherwise, how would it know whether to keep working, whether to iterate again to "improve" the result?
I think these are tools that can be used like DIY tools, which allowed people to begin making repairs around their own homes. I can replace a faucet with tools I buy myself, but I'm not going to install a whole toilet. I probably could, but I would <i>have to know what I was doing.</i> I just <a href="{app}view_article.php?id=5440">installed a new SSD into my 8.5-year-old iMac</a>. That's a task for which most people would have either given up and bought a new machine or gone to a specialist. There's a DIY kit from OWC with a 25-minute video that allowed me (with my fastidious partner) to repair and extend the machine on which I'm writing this right now.
Like DIY tools, AI-based tools enable more people to get into building software. This isn't the first time this has happened, though---not by far. <a href="https://en.wikipedia.org/wiki/VisiCalc">VisiCalc</a> in 1979, <a href="https://en.wikipedia.org/wiki/Lotus_Symphony_(MS-DOS)">Lotus Symphony</a> in 1984, and <a href="https://en.wikipedia.org/wiki/Microsoft_Excel">Excel</a> in 1985 did it long before them.
Subsequent tools like <a href="https://en.wikipedia.org/wiki/Delphi_(software)">Delphi</a> in 1995 kicked off <a href="https://en.wikipedia.org/wiki/Rapid_application_development">RAD</a> (Rapid Application Development), in which many, many more people were empowered to create their own tools, with much more specialized and professional UIs than they could build with Excel. <a href="https://en.wikipedia.org/wiki/Microsoft_Power_BI">Power BI</a> (2015) showed up to allow even more non-programmers to produce software.<fn>
All of these were successful in their own way, but none of them naturally led to maintainable, extensible, or even very usable software. Only in the hands of an engineer---someone with a flair or talent for translating logic from requirements to software UI/UX---would "good" software emerge. A lot of it was good enough for its purpose; it satisfied the requirements, especially when the goal was to produce a proof-of-concept or a one-off tool.
<h>Where it goes off the rails</h>
The problems begin when people forget or deliberately ignore the original scope of the requirements and start scaling up a product to <i>save money in the short term.</i> There is no reason to believe, given that we have such long experience, that AI tools will encourage people to build better tools or solutions than any of these other exemplars of the RAD revolution did. People with tools but no skills, talent, or knowledge will occasionally and seemingly accidentally build something passingly useful, but, if experience is a guide, most of it will be incorrect, unusable, unmaintainable, or a combination of all three. Despite this, so much of this kind of software burrows into a company's innards. These tools become essential, despite how much pain and inefficiency they end up causing relative to well-engineered solutions, which have a much smaller TCO (Total Cost of Ownership).
With LLMs, given that they're reinventing everything every single time, there isn't even much building on existing software going on. You're almost always starting fresh. Even when you have an existing codebase, you're shoving in as much context as you can---energy and cost budgets be damned---and telling the tool to "reason about it." This is an incredibly hopeful endeavor. It is rewarded just enough to encourage fanatics to keep going. It's more like gambling than programming.
The fact that AI tools---and algorithms, in general---aren't deterministic makes them difficult for me to use for many things. I don't like the idea of having to pay 100%-focused attention to everything to make sure that I uncover the mistakes or the lies that are inherent to the tool. Local, analysis-based tools might have had bugs, but they weren't inherently stochastic. A search engine will also not deterministically return the same results. There was already slippage there. Wikipedia might have been edited since you last looked at it. Research relies on solid, unchanging citation sources. How do you do research, how do you build knowledge, when the sands are constantly moving about beneath your feet?
<h>Where you should be extra-careful</h>
You should be really careful about how you <i>innovate</i>. One of the main paradigm shifts in AI is that we've now moved from building stuff we can imagine to asking what we think the tool can build for us. Previously, we had local tools that told us what was possible---without filters---and we built stuff out of that. Now, we ask an online machine to filter the world's information for us. This can be a real time-saver, of course!
But it can also eliminate possible solutions from our "gaze" (as noted in the section above). This might happen innocently and naturally, as the machine decides against telling you about something that it has, not unreasonably, determined to be statistically irrelevant. But it might also just be actively blocking certain ideas, technologies, and techniques. It will almost certainly do so, in fact. It almost certainly is <i>already</i> doing so. When you ask for a web solution, it's going to use React and Tailwind.
<h>FBD (Faith-based development)</h>
People are being unreasonably hopeful about what these systems can do and about how much of the available information they're being presented with. They think that "it searches the web" now, or that "the latest information is being added". These hopes are based purely on faith. There is no incentive for these companies to emphasize actual utility and empowerment for you; the incentive is to focus on addicting you to their technologies and then jacking up the subscription prices. Companies that focus on end-user utility over share price <i>go out of business</i>. There is no reason to believe that the AI tools that we have are not on an <a href="https://en.wikipedia.org/wiki/Enshittification">enshittification</a> track. Even the purportedly open-source ones aren't open-source enough to use without restriction---except for DeepSeek, which will probably be banned in Europe sooner rather than later.
<h>"Luck" is a tool too</h>
Serendipity plays no small part in innovation. It's mostly hard work, but there's always a kernel of luck, in which you had a good idea that was triggered by...what? If you only use tools that take you over well-worn grooves, where will you ever hear about something new? Or be inspired to think of something new yourself?
<h>Old Man Yells at Cloud</h>
And please don't trot out the "you sound like an old man complaining about the new world passing you by" argument. You're better than that, aren't you? Of course you are! You're ready with well-reasoned arguments for why this brave new world is better, and for <i>what</i> and for <i>whom</i>. I'm not against anything generally; I just have questions that I would like to have answered so that I know where I would use this tool. If the answer is "everywhere and for everything," then I feel that the bar is even higher, as I will then have even more trouble distinguishing an actual scam from your hand-waving and inability to express your argument convincingly. People are forever trying to waste your time, or commercialize it, and you should resist, parceling out your attention very parsimoniously and carefully rather than just capitulating to whatever the algorithm or the ones shouting loudest tell you to do.
<h>Be your own algorithm</h>
Perhaps I'm more resistant or ornery because I already control the inputs for <i>everything else</i>. I choose the music that goes in my playlists; I choose whether to listen to the radio or a random source to learn about new things, to expose myself to previously unheard music. When I hear something I like, I add it (and possibly its album) to a playlist that I listen to later, rating the songs as I go. Those ratings allow newer good stuff to trickle into the smart playlists I use when I want a shuffled mix of music that I personally consider good. There is no algorithm, except as a very controlled input rather than as the <i>only</i> input.
I do this with news as well, generally following very specific video channels or blogs or newspapers with categorized RSS feeds (hundreds of them). My newsfeed is carefully curated, but I also use Hacker News, Reddit, and newsfeeds from "mavens" to expand my palette and acquire new sources. As with music, I carefully control the algorithmic input.
It's the same with movies and TV series. I make "watch later" lists and almost never just jump on what's being offered, unless I'd heard about it and was dying to check it out anyway. I hear about new things by following the blogs and channels of actual cinephiles (like <a href="https://vimeo.com/honorszombiefilms" source="Vimeo">Scout Tafoya</a>) rather than the marketing departments of movie studios. I sometimes use the curated movie selection at Mubi or on all of the channels on my UPC to choose movies that I might be interested in, but I almost always add them to a "watch later" list rather than being steered into changing my priorities right then and there by circumstance.
<h>We already have most of these things</h>
We should remember that many of the supposed miracles offered by these so-called AIs were already available before. You just had to pay attention and familiarize yourself with the tools. This obviously does not apply to <i>everything</i> that AIs can do, but I sometimes feel that the zeal for AIs stems more from not having understood what came before them than from anything genuinely new. People are using them as search engines, for God's sake. That's like using the back of a screwdriver to hammer in a nail. But some of the "revolutionary" features being hyped are pretty banal when looked at in the sober light of the noontime sun. For example, <a href="https://daringfireball.net/2025/03/something_is_rotten_in_the_state_of_cupertino" author="John Gruber" source="Daring Fireball">Something Is Rotten in the State of Cupertino</a> cites Apple describing a new feature,
<bq><i>Onscreen awareness</i> — Giving Siri awareness of whatever is displayed on your screen. Apple’s own example usage: “If a friend texts you their new address, you can say ‘Add this address to their contact card,’ and Siri will take care of it.”</bq>
You can already do this without Siri. Addresses have been highlighted automatically when you press them---or hover over them on a Mac---for decades. From there, it's a right-click and "add to existing contact." This was not hard before. The only "improvement" is that you get the additional frisson that you're living in the future because you're talking to a device.
It picks up the contact from the context, you might say. Well, you <i>hope</i> that it does. Instead of taking a few seconds to select the contact---or verify that the contact selected by default is correct---you just <i>hope</i> that the address was added to the contact correctly. Unlike with previous features, it's not a bug when it goes wrong---it's just the stochastic nature of the tool. Unlike with previous features, you can't even assume that, because it worked one time, it will work the next time in exactly the same way. If that guarantee were there, then where would the AI or intelligence be? Where would the much-vaunted stochasticity that lent these tools their power be? I <i>hope</i> that we're moving toward reliability with these tools---but I don't see that these companies, or most of their users, even consider that a selling point.
<h>What's the product and why is that better?</h>
In a similar vein, a comment on <a href="https://old.reddit.com/r/gadgets/comments/1j9l7ii/all_this_bad_ai_is_wrecking_a_whole_generation_of/mheyna8/">All this bad AI is wrecking a whole generation of gadgets | We were promised multimodal, natural language, AI-powered everything. We got nothing of the sort.</a> reads,
<bq>[...] <b>you can accomplish detailed tasks with much less effort than it takes to detail them to an assistant</b>, digital or physical. E.g. if I want to book a trip and have a travel booker app installed with my info saved, it legit takes me 30 seconds to book a flight, hotel, rental car etc and then I'm sure cuz I did it myself vs some janky ass AI doing it then me having for review it anyway to make sure it didn't fuck up. <b>The utility would be at the ill defined margins, in making judgments on fuzzy things.</b></bq>
I think this is an important point, one that is borne out by a lot of anecdotal evidence that coding AIs are good for prototypes. What the commenter is describing is a sort of prototyping of additional functionality for existing UIs. Once the value of the additional functionality has been determined, it can be converted to an actual UI, which is more efficient to build, maintain, and use (rather than reinventing it ad hoc with each query, as you do with LLMs).
This is a common pattern: some tech starts off as software and, once a pattern has been established, migrates down to FPGA-based solutions or even to dedicated hardware. Sometimes those hardware solutions end up on only slightly less-generalized hardware, like graphics cards. Almost nothing starts out as a hardware-based solution. This notion of "virtualization during development" is already prevalent in industrial development, in which it's becoming ever more realistic to delay development of the actual hardware. It's acknowledged, though, that the ultimate goal is <i>still to eventually develop the hardware</i>.
That's the difference from the AI hype: virtualization in industrial development is considered a tool that makes development of the end product more efficient; it's never considered to be the end result itself. Many AI vendors make a different argument, selling their tools as being for creating the end product <i>directly</i>, rather than as tools to <i>help you build</i> the end product. I'm not saying that everyone is making that argument or that no-one is making the "AI as tool" argument---there are enough examples of "coding assistants", for example---but the loudest hype, especially from the more uninformed sources, is making that nonsensical argument. Unfortunately, this has a negative side effect on the whole area of research.
<h>Coda: use these tools where they're useful</h>
Another comment on that same post---<a href="https://old.reddit.com/r/gadgets/comments/1j9l7ii/all_this_bad_ai_is_wrecking_a_whole_generation_of/mhgvmoo/">All this bad AI is wrecking a whole generation of gadgets | We were promised multimodal, natural language, AI-powered everything. We got nothing of the sort.</a>---reads,
<bq><b>there are legitimately a lot of helpful applications of generative AI.</b> It's definitely a lot better than the NFT boom for example. Quick example: quickly <b>writing rough drafts of emails or helping you past writers block</b>, or generating quick images for ideating/brainstorming.
For a lot of semi-technical questions (think high school or college homework-level) it can quickly solve a problem for you or run a calculation that isn't easily solvable with a basic calculator or google search so that you don't have to, <b>as long as you are knowledgable enough at the subject to check its work</b> (which is usually quicker than doing it from scratch). AI code assistants also speed a lot of people up. <b>It's far too reductionist to say that the entire thing with AI is BS buzzwords</b> even if gadget+AI from big tech companies hasn't worked out yet.</bq>
Yes, a lot of people find it much more efficient to correct existing text than to produce their own text from a blank slate. This is especially helpful---or essential---when working in a nonnative language (which is a loooot of people) or when you're not even that solid in your native language (also a looooot of people). The hype train is annoying, but there is real value here. You just have to dig through a morass of hype to find it. And you have to hope fervently that the actually useful parts don't get enshittified away in favor of the more lucrative, short-term, hype-based profits.
<hr>
<ft>The following citations were of interest:
<bq>The work in question, TV-Dé-coll/age No. 1 by Wolf Vostell, was created out of his concern for “the televisual environment becoming dominant” in communication. <b>The artist believed that video technology frames ideas in a way that robs people of their ability to reason; the canvas is therefore a tongue-in-cheek warning of how the medium affects the message.</b></bq>
<bq>We view everything through an “algorithmic gaze,” forgetting the reality of how messages were brought to us in the first place (let alone how they’re contained). <b>This can affect how we ultimately construct our idea of reality.</b></bq></ft>
<ft>Potential areas of application</ft>
<ft>My own company Encodo (which I left in 2021) started work in 2006 on a product called Munjari that offered people the ability to organically design their own database schemas by defining relationships between domain objects. It wasn't described like that, of course, but we nudged people in the direction of deciding the shape of their data and various entities' relationships to each other. This was way back in the mid-2000s and nothing much came of it. We worked on it, on and off, for a few years, but the limitations of the web technology we were using at the time---<a href="https://www.java.com/en/">Java</a>, with the <a href="https://tapestry.apache.org/">Tapestry</a> web framework, backed by the <a href="https://hivemind.apache.org/hivemind1/">HiveMind</a> IoC container---prevented us from really getting off the ground. There were some interesting prototypes, though, which have been lost to the mists of time. The underlying technology---in particular, the ORM and automated schema migrator---would become Quino, which we ported to .NET and C# and which is still alive and kicking today. It powers apps that are built in the classic way, by programmers, rather than clicked together by business domain experts in a web UI.</ft>
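<ft>Purely as an illustration of what the previous footnote means by "defining relationships between domain objects": here is a minimal, hypothetical sketch (in Java, since that's what Munjari was built in; this is not Munjari's or Quino's actual API) of declaring a couple of domain objects and deriving a schema from that metadata.
<pre>
// Hypothetical sketch (not Munjari's or Quino's actual API): declare domain
// objects and their relationships, then derive a schema from that metadata.
import java.util.List;

public class SchemaSketch {
    // A domain object is just a name plus its relationships to other objects.
    record Entity(String name, List<Relation> relations) {}

    // A to-one or to-many relation pointing at another entity by name.
    record Relation(String target, boolean many) {}

    // Derive simple DDL from the declared model: one table per entity, with a
    // foreign-key column on the "many" side of each relationship.
    static String toDdl(List<Entity> model) {
        StringBuilder ddl = new StringBuilder();
        for (Entity entity : model) {
            ddl.append("CREATE TABLE ").append(entity.name()).append(" (id INTEGER PRIMARY KEY");
            for (Entity owner : model) {
                for (Relation relation : owner.relations()) {
                    if (relation.many() && relation.target().equals(entity.name())) {
                        // The "many" side gets a column referencing its owner.
                        ddl.append(", ")
                           .append(owner.name().toLowerCase())
                           .append("_id INTEGER REFERENCES ")
                           .append(owner.name())
                           .append("(id)");
                    }
                }
            }
            ddl.append(");\n");
        }
        return ddl.toString();
    }

    public static void main(String[] args) {
        // The domain expert only declares the shape of the data...
        List<Entity> model = List.of(
            new Entity("Invoice", List.of(new Relation("LineItem", true))),
            new Entity("LineItem", List.of())
        );
        // ...and the tool derives the corresponding schema.
        System.out.print(toDdl(model));
    }
}
</pre>
In Munjari, the declarations were clicked together in a web UI rather than written in code, and the automated schema migrator would then keep the database in step with the evolving model; the sketch only shows the underlying idea of model metadata driving the schema.</ft>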