If You Are a Software Developer Using AI, You Are a Manager Now

The past two years have been an emotional rollercoaster for software developers, who have watched in fear as the unstoppable forces of innovation, in the form of LLMs, began to target them and their expensive salaries, driven by enterprises' ever-growing interest in cutting costs and exercising the capitalist imperative of increasing their margins.

For a brief moment in time, software craftsmen became the taxi and truck drivers, the miners, the average blue-collar workers expected to be automated away by some mechanical marvel of modernity.

However, most developers are still forced to deal with inhumanly large quantities of complexity and chaos in their jobs, and it is not getting any better.

At this point in time, you cannot build an application using just LLMs, but you can definitely build one with an experienced software engineer and an LLM.

In this article I will summarize my expectations for the future if AI keeps advancing, and what its implications for software will be.

LLM as a search engine

If the purpose of a search engine is to aggregate and organize information from around the web, an LLM could be defined as a better kind of search engine.

It is akin to a librarian who has read all the books in the library, can answer any question you may have, and can point you in the right direction by selecting the right bibliography to resolve your doubts.

In the pre-LLM era, computer people relied on information spread across a variety of forums, technical documentation, subreddits and Stack Overflow threads to do their job.

You have a problem, you look it up, someone else had it and found a solution, which you promptly apply yourself, perhaps adapted to your current use case.

If you are lucky enough to find some undocumented bug, you are completely on your own unless you can talk to someone more familiar with the technology.

Now you can ask an LLM about your problem, and it will direct you to the right results.

Sites such as perplexity.ai are implementations of this vision.

This is an amazing use case for the technology, and it will make you more efficient at your work by connecting you to the right information to execute.

But if LLMs were only a better version of a search engine, we would not be having this conversation.

LLM as a synthesizer

Most of the value that LLMs produce comes from being a completely new kind of user interface for humans to leverage computation.

Up until now, if you wanted to leverage the computing power of your machine, you either:

  • Needed to know how to code or how to leverage some kind of automation framework.
  • Needed to know how to use code written by others.

The general abstraction is the same: you ask for something according to a specification, and you receive something that matches it as closely as possible.

For example, an accountant may want to convert all rows of a very large spreadsheet from one format to another, or extract data from a PDF and put it in that spreadsheet.

Now he or she has a magic message prompt that can receive such data and execute a request upon it, instead of relying on a weird computer wizard.
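To make the accountant example concrete, here is the kind of throwaway script the "computer wizard" would traditionally have written, and that an LLM can now generate or replace outright. It is a minimal sketch: the column name and the two date formats are hypothetical, not taken from any real spreadsheet.

```python
import csv
import io
from datetime import datetime

def convert_dates(csv_text, column, src_fmt="%d/%m/%Y", dst_fmt="%Y-%m-%d"):
    """Rewrite one date column of a CSV from src_fmt to dst_fmt."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = []
    for row in reader:
        # Parse the cell with the source format, re-emit it in the target one.
        row[column] = datetime.strptime(row[column], src_fmt).strftime(dst_fmt)
        rows.append(row)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

# Hypothetical two-row spreadsheet export.
sample = "invoice,date\nA-1,31/12/2024\nA-2,01/01/2025\n"
print(convert_dates(sample, "date"))
```

The point is not the script itself but who no longer has to write it: the prompt becomes the interface, and this kind of code becomes an implementation detail.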

Usually you need to refine that something again and again through different queries, adding your own expertise about the problem you are trying to solve, but eventually these LLMs can be used to craft something useful.

The more powerful a model becomes, the sooner you will get to a satisfactory something.

This something is generally represented as tokens, and those tokens can be anything, from natural language to code.

And this is where SEO people, advertisers, blog writers, content creators and coders may soon become critically endangered species.

We can assume that today's LLMs are the worst models they are ever going to be, unless further progress turns out to require a paradigm shift.

LLMs as a specialized employee

There is a problem with these models: they are expensive and unreliable, in much the same way that human consultants and junior software engineers are.

This means that if they do not know something, they will make it up so as not to look dumb or get fired, so anything they come up with needs to be systematically tested and reviewed.

In the industry these are called hallucinations, but to me this is just the traditional bullshit you may find in any office around the world.

If you have had employees or had to deal with contractors, you know they will usually save face by saying things that sound plausible but do not precisely answer the question.

The delicious irony here is that these models have learned to execute human tasks so well that they have become as flawed as we are.

If you use LLMs as autonomous agents to generate information assets for tasks such as code, testing or documentation, you will need to organize their output into a final product: you have effectively become a manager.

As any manager knows, a worker who needs constant supervision is a liability and a risk to the organization, which is what is keeping full-scale LLM adoption at bay in information-based businesses.

However, if the reliability problem is solved, it might kickstart a productivity revolution like the one software engineers have already been through.

AI that can generate reliable machine code from specifications is what we usually call a compiler or an interpreter.

Nothing new under the sun.

The right question to ask is how to leverage these higher-level compilers to solve current problems.

The present

I am currently learning how to use these LLMs to find new ways to create software, to navigate large repositories of information, and to crunch ridiculously large traces, logs and compiler failures.

For me the most mature use case is navigating documentation and getting to a concrete answer from it adapted to a specific problem.

I have tried to build agents with self-hosted models, but I have found them too unreliable to trust, as both their decision-making and their results are stochastic.

Regarding code generation: I do not like code generation tools. I prefer a minimalist and reliable IDE with good linters and predictable autocomplete.

I also want to understand all the code I deliver at least once and have it well tested.

Some proponents of things like Copilot think that it is useful for boilerplate code. But as a programmer, I completely reject the idea of boilerplate code.

Boilerplate is a symptom of an inability to write good, maintainable code. It is not real productivity.

If you understand how to write SOLID software, then boilerplate should be nonexistent in your code base, because you should not repeat yourself.

If your language of choice forces you to introduce the same syntactic structure again and again, you can use templates for that.
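As a minimal sketch of what that can look like, here is a hypothetical example using Python's string.Template to stamp out a repeated accessor pattern instead of typing it by hand; the field names and the accessor shape are illustrative, not taken from any real code base.

```python
from string import Template

# Hypothetical repeated structure: one read-only accessor per field.
ACCESSOR = Template(
    "def get_${name}(self):\n"
    "    return self._${name}\n"
)

# Declare the varying part once; generate the repetitive part.
fields = ["name", "email", "balance"]
generated = "\n".join(ACCESSOR.substitute(name=f) for f in fields)
print(generated)
```

The template captures the repetition in one place, so adding a field means editing a list, not pasting another near-identical block.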

The future?

Let’s speculate a little bit.

Evolution

AI capabilities can evolve both in breadth and in depth.

Evolving in breadth means becoming more general. This is the case for artificial general intelligence, and it is a traditionally hard problem to solve, even for biological forms of intelligence such as humans.

Evolving in depth means becoming more specialized, getting better at processing certain specific kinds of problems. This is what most AI technologies have done, and I will assume it is the case with LLMs.

What happens if someone gets AGI?

If we eventually get AI systems with a high level of capability in both breadth and depth, then all bets are off, and it is sci-fi time for everyone involved.

Everything is possible after this point, be it dystopia or utopia: either we get Terminated, or we begin a glorious age of enlightenment and expansion across the vastness of the universe.

I will ignore this scenario and assume that LLMs will become highly specialized natural language processing tools.

What happens if we have access to reliable LLMs for specialized tasks?

With the availability of models that are able to provide specialist services in a sufficiently reliable manner, I think that human capital may go into two directions.

Professionals might scale up their operations to provide a full-stack product or service, which would be a complete revolution for the information economy in terms of productivity.

You would be able to have software as a service capable of creating fully fledged applications for well-known, solved problems and then specializing them for concrete use cases.

Think of WordPress and no-code apps but for any kind of software application.

Markets for human capital will be created at places in which attention or time is needed and where training data for those LLMs is not available.

I can think of markets that have constantly changing requirements, where technology adoption is too slow, or where vital key information is siloed away due to regulations.

I think the best-case scenario is one in which you own these models and the data they are trained on, so that these specialist systems enhance you as if they were part of an Iron Man suit.

This is why I believe that initiatives such as the TinyBox, which provide consumer hardware that allows professionals to train and operate these models, are so important.

Having access to these computing resources as literal hardware or through the cloud will be instrumental if you want to keep control of your productive output as a producer of information assets.

You will become, in a way, your own niche.

Conclusion

Probably the same thing that has happened in industrial society since its inception: human capital will be repurposed to use the new machinery to face the next hard problem humanity needs to solve, as its scalability issues constantly evolve and change.

There will still be a need for specialists, because someone has to create those specialist systems and really understand what is going on.

There will still be generalists, because someone needs to do the strategic and systems integration work to solve concrete problems in space and time.

I will leave it here as this article is already too long, and I am still figuring out where this hype train is going to end up.

No, software is not going away; if anything, it will become ever more important as these systems become more deeply entrenched in our lives, but it will change in ways that might be difficult to predict.

whoami

Jaime Romero is a software engineer and cybersecurity expert operating in Western Europe.