How Software Engineers Actually Use AI to Improve Productivity

Date: 2024-10-29 | Tags: business | tech | software-engineering | artificial-intelligence

Over the past few years AI has improved immensely - giving us AI chatbots, image generation, and now widely accessible video generation. All this seems to point to AI improving exponentially, which raises the question - when will it be able to rival or surpass us at our own jobs, potentially replacing us?

I don't claim to have answers about the future of work, but I do have some first- and second-hand experience and observations on how software engineers are using AI in their workflows. That's what we'll examine in this post - where software engineers are finding AI useful, and not so useful, in their coding workflows.

What do I mean when I say AI?

We'll be talking about "AI" a lot in this post, but that is a super broad term. "Artificial Intelligence" is basically the label we give to anything that seems to solve new problems better than previous approaches, before we've found a proper name for it.

Here when I'm talking about AI in the context of improving software engineer productivity, I'm mostly talking about widely available tools and their backing systems - usually LLMs today.

A non-exhaustive list of players includes:

  • OpenAI's GPTs
  • Anthropic's Claude
  • Microsoft's Copilot
  • Google's Gemini
  • CursorAI

And of course the hundreds of wrappers / tool integrations that do similar things.

Where is AI useful for Software Engineers?

Over the last year there have been more and more frequent claims that AI will replace software engineers. Let's look at where AI shines when it comes to software development.

AI is good at answering natural language questions

TL;DR - AI gives good summary answers faster than traditional internet search

AI can accept broad natural language questions, infer intent from a conversation's history (like you asked about x before y, so you probably mean y with respect to x), and translate from one language (like English) to another (like F#), making it much more efficient for answering many kinds of queries that come up in a normal day.

These answers of course are not always complete or accurate, but they often give enough of an understanding of a subject to satisfy your curiosity, or at least enough info to unblock yourself and ask better followup questions. Plus, while AI gets a lot of heat for inaccurate answers, we should remember that a lot of the content surfaced in Google searches is itself not peer reviewed and of dubious quality.

So here we can say that AI gives us answers that are roughly equivalent in quality to an average answer from a Google Search. But where it really shines is in doing this so much more efficiently than you could yourself.

In the previous state of the world, if you had a query you would often need to do a few Google searches, read 3-4 articles / blogs / forum threads (like SO or Reddit) to see what each said, then compile that data into a core understanding before tweaking that understanding to answer your actual question. This may take even longer for competitive search terms, as you have to sift through many keyword-stuffed results that look like they'll help but don't actually cover your topic or answer your question.

With these AIs: you can search all in one tool and get an ~80% reasonable answer to unblock yourself much faster.

Again, the AIs are not always accurate or complete, and they do not compare to someone with actual expertise, but this info is on par with a typical search result and MUCH faster - so that's what we should be comparing against.

As an example: I use AIs to ask most questions I would previously take to Stack Overflow, guide sites (like W3Schools / MDN), or personal blogs. Simple stuff like "show me a list comprehension in Python", "x error in dotnet", or "create a browser-based timer" gives me a 90% answer in seconds, whereas previously I would need to sift through good and bad answers across multiple pages and compile them together to get what I actually wanted.
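To illustrate, here's the kind of 90% answer a query like "show me a list comprehension in Python" typically comes back with (a minimal sketch, not the verbatim output of any particular tool):

```python
# Build a list of the squares of the even numbers in one pass.
numbers = [1, 2, 3, 4, 5, 6]
even_squares = [n * n for n in numbers if n % 2 == 0]
print(even_squares)  # [4, 16, 36]
```

That's the whole reference lookup - syntax, a working example, and a result - without opening a single search result.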

If I had to guess, I would say this speeds up the path from most queries to a reasonable answer by a good 50%, and simple reference lookups are often MUCH faster.

AI is good at helping you explore a space and come up with ideas

TL;DR - AI allows you to explore ideas and come up with related ideas faster.

Today's AIs have several nice features that when combined lead to powerful exploration and ideation tools:

  • AI is basically a search engine on the internet
  • It can process natural language questions - so even if you don't know an area and its keywords, it can often infer what you're trying to ask about and give reasonable results
  • It's super fast at answering things

Together this means you can ask the AI dumb questions that don't contain any of an area's relevant keywords and still often get back reasonable results. Moreover, you can ask followup questions about those results to quickly understand what they are and why they're relevant to you.

Here's an example query I'm asking Claude:

Query: Provide some ways to deal with concurrency in F#

Answer: (shortened for brevity)

Some approaches:

* Async Workflows
* MailboxProcessor (actor)
* Tasks (async)
* Concurrent Collections (dotnet)
* Lock-based Synchronization
* Channels
* Semaphores
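To make one of these concrete - the MailboxProcessor (actor) approach serializes all access to state through a single message loop. Here's a rough approximation of that pattern sketched in Python for illustration (the real F# API differs; the names here are my own):

```python
import queue
import threading

# Actor-style "mailbox": one worker thread owns the state and processes
# messages one at a time, so the state itself never needs a lock.
def start_counter_actor():
    mailbox = queue.Queue()

    def loop():
        count = 0
        while True:
            msg, reply = mailbox.get()
            if msg == "incr":
                count += 1
            elif msg == "get":
                reply.put(count)   # send the current count back to the caller
            elif msg == "stop":
                return

    threading.Thread(target=loop, daemon=True).start()
    return mailbox

actor = start_counter_actor()
for _ in range(3):
    actor.put(("incr", None))

reply = queue.Queue()
actor.put(("get", reply))
value = reply.get()   # messages are processed in order, so this is 3
print(value)
actor.put(("stop", None))
```

The point isn't the code itself - it's that the AI can hand you both the list of approaches and a runnable sketch of any one of them in the same conversation.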

Q: Do you know how hard it would be to get a simple list of these through a traditional search engine?

I would basically have to hope that some person / organization has compiled this info and kept it up-to-date. Our best bets are probably Microsoft's docs (but those are big and hard to navigate) or someone's personal blog like F# for fun and profit or some very nice person on a forum like SO or Reddit (but those are both rare and often not kept up-to-date).

Then if you want to answer followup questions - you're again playing the slot machine of what kind of post someone has written and the algorithm decides to show you. But with AI I can get reasonably consistent results quickly, all in one place.

So while AI's info may be a bit dubious at times, all in all it's actually a better tool for answering human questions than basically any interface we've yet seen. In many ways it helps to unlock the power of the internet's information - it's been available for decades but now it is much easier to explore and parse.

AI is good at providing code references, drafts, and proofs of concept

TL;DR - AI can quickly provide examples of most coding patterns, allowing faster exploration of a given approach.

AI is good at giving reasonable summary answers quickly. This extends to code examples as it can pull from all those examples on the internet and provide you with a reasonable average of those.

This makes it super useful as a code reference tool as you can get your queries answered along with an example in seconds - and you can ask followup questions about it to tweak it at the design stage instead of the code stage.

Examples of queries I use a lot for this:

  • Language feature questions - try / catch in Python, Python list comprehension, F# class syntax
  • Small system example - F# mailbox processor that runs on a timer, simple time-based cache that refreshes itself
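For the second kind of query, the draft you get back usually looks something like this minimal self-refreshing cache (a Python sketch with my own naming, not any tool's verbatim output):

```python
import time

# Minimal time-based cache: recomputes the value whenever the cached
# entry is older than ttl_seconds.
class TTLCache:
    def __init__(self, compute, ttl_seconds):
        self.compute = compute       # zero-arg function producing a fresh value
        self.ttl = ttl_seconds
        self._value = None
        self._fetched_at = None      # None means "never computed"

    def get(self):
        now = time.monotonic()
        if self._fetched_at is None or now - self._fetched_at > self.ttl:
            self._value = self.compute()
            self._fetched_at = now
        return self._value

calls = []
cache = TTLCache(lambda: calls.append(1) or len(calls), ttl_seconds=0.2)
v1 = cache.get()     # computes: 1
v2 = cache.get()     # within the TTL, served from cache: still 1
time.sleep(0.25)
v3 = cache.get()     # TTL expired, recomputed: 2
print(v1, v2, v3)    # 1 1 2
```

It's rarely production-ready (no locking, no error handling), but as a starting point to tweak at the design stage it's hard to beat for the time spent.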

This ability can also be useful for writing some code boilerplate or minor refactoring as AI can write an example in isolation.

Some engineers report this being useful to overcome writer's block / laziness by giving them a lower barrier to just starting on something. Once they have something to start from they feel iterating on it is quicker / easier.

Where is AI not that useful for Software Engineers?

We've gone over several areas where engineers report productivity improvements from AI; now we'll go over some areas where engineers currently report mixed or negative experiences.

AI is not great at coding full systems and features

TL;DR - While many developers find AI useful for coding up rough drafts, writing boilerplate, doing minor refactors, and providing code examples / references, most also find it lacking when creating full features / systems.

A majority of these issues seem to arise from a lack of context on the domain / wider system. AI can spit out a lot of code, but it often needs to be tweaked / modified to actually fit the overall goal. Given the number of tweaks needed, this sometimes means a very long refactor process to get something usable out of it.

To be fair, some of this lack of context could be solved by giving the AI full access to your codebase (which many orgs don't allow) or by crafting better prompts that give all the necessary info (but this is actually hard / a lot of work, as evidenced by any sufficiently large project's Product Requirements Doc, alignment meetings, and product-focused staffing - if humans can't get it right, how can we expect AI to?).

This is basically the meme - in order for AI to replace us, clients would need to accurately explain what they want. So our jobs are safe.

Both of these seem solvable, so AI software engineers are likely still possible, but as the joke goes - we're 17 months into being 6 months away from an AI software engineer. So we'll have to wait and see how long this actually takes.

As a counterpoint - some people have shared very positive experiences with the new Copilot 2.0 competitors like CursorAI and PearAI (and I'm sure a bunch of new ones I haven't seen yet). These people claim it has much improved their workflow and allowed them to code features much, much faster.

On the other hand I've seen about as many people going the other way - turning these code tools off because they found it gets in the way more than it helps. So a bit of a mixed bag.

The main problems with code generation today seem to be:

  • Decent draft in isolation but misses key context requiring a large amount of rework
  • Because the AI generated all the code, it often takes a good amount of time to read through and understand it yourself. This means that bugs / refactors / features that appear down the line are harder to fix / build because you didn't write the thing and lack the initial context. So short-term gain but potentially long-term loss due to rework.

AI is not an expert

TL;DR - While AI is good at summarizing a large breadth of domains, it lacks in depth in most of them

AI's ability to quickly give us good summaries of areas is often misconstrued as "intelligence". This intuitively makes sense, because shouldn't an entity with access to all the knowledge on the internet be the smartest thing we've ever created?

The answer is yes, it should be - if it were a thinking entity. But currently LLMs are just advanced autocomplete, so whatever "knowledge" is on the internet is what they spit out. Unfortunately, a lot of the information on the internet is again of dubious quality, so they just regurgitate that back to us - summaries of dubious quality.

Now all in all this is still useful as we've seen - often times we just want a summary of info on the internet and it is super good at that.

However, this does mean that many of these answers are still not as complete or accurate as what an expert in the domain could provide. In almost every domain, if you ask deep cuts about something and compare the answer with an expert's, you'll see that the AI is missing whole areas of information. If you then prompt the AI to include that area it usually can, but it may have trouble coalescing these bits together.

This all ties back into AI being a search engine / autocomplete on information. It is good at the info that exists but it is bad at interpreting new "knowledge" out of that information.

This is kind of like the saying - you don't know what you don't know. Experts are often experts not because they know everything, but because they know enough to realize where they don't know something, so they can research it and fill in the gaps.

Today AI is simply not there yet. It has incredible breadth of knowledge, but it starts to miss at any level beyond that, where new knowledge needs to be created from a combination of existing information.

For software engineers this often comes into play when you start going deep on tradeoffs between approaches. The AI is good at giving you textbook answers around microservices vs monoliths or htmx vs React, but if you start diving into the specific domains / usecases you have within the context of those comparisons, the answers start to miss. Again, I believe this is because this is "new" knowledge that the AI has not been trained on - there is no combo of your specific app x tech stack x domain on the internet, so it doesn't have anything to reference.

It's here, in my mind, that software engineers / experts in any field are still necessary. Because they know what they don't know and can create net new knowledge on the fly.

I'm certain that AI will be able to do this in time but we're not there yet.

How much more productive are software engineers using AI?

Personally I've found AI most useful as a better search engine / reference tool. I can spitball ideas at it, ask it to explain and give examples for various approaches, then use that info to validate / iterate on my ideas faster.

All in all I would estimate AI improves my productivity by about 20%. It's most useful at answering my coding questions but answering questions is only part of the job.

Next

AI has become a tool I use daily as a software engineer. I currently have Anthropic's Claude pinned in my browser and often go to it first for answering my questions.

That said, people's experiences with AI have varied widely. Some find it super useful while others say it hurts more than it helps. Moreover, these experiences seem to vary by technology and usecase, which to me means these tools are still in flux and haven't fully found market fit.

Looking forward it's very likely these tools will improve not only as a search engine but also as an autonomous agent writing, testing, and reviewing code / ideas / new knowledge. Will it actually replace software engineers? Time will tell but it certainly will become a more useful and powerful tool for building software.

Let me know in the comments how you're using AI and what you've found it good / bad at.

Want more like this?

The best / easiest way to support my work is by subscribing for future updates and sharing with your network.