
Thoughts on the 2024 ML, AI & Data (MAD) Landscape


Setting the stage #

I was catching up on the 2024 MAD Landscape[1]. This is the latest of Matt Turck’s yearly insights into the macro ML/AI ecosystem. It’s a great read – highly recommend!

As a software engineer navigating the ML/AI space, I found that a few of the report’s trends and observations stood out to me. Some relate to the industry as a whole, some to changes in tooling, and some to promising areas of development in the near future.

Below are my takeaways and interpretations of the trends that feel especially relevant. Credit for much of the source information goes to Matt Turck and the MAD Landscape write-up.

Unstructured data 🌪️ #

Over the past decade, there was a lot of emphasis on structured data. As a consequence, a lot of work went into structured data pipelines, resulting in the Modern Data Stack (MDS). If data can be represented in rows and columns, there’s no shortage of tooling that can help, and the use cases are well understood.

However, there’s a lot of text, image, video, and audio content that doesn’t fit in that shape. Since the rise of Generative AI, a lot more of this unstructured data is being produced and consumed. Specifically, this content is used as input to train LLMs and is also the output of LLM inference.

Speaking from firsthand experience, we’ve come to rely a lot on the rich structured data ecosystem for building software. Now with unstructured data, we’ll need something similar for new applications. On the engineering side, we have an opportunity to build out this ecosystem, and possibly help define the best practices.

That said, I don’t think structured data is becoming obsolete. My guess is we’ll end up with a healthy variety of tooling for both structured and unstructured data – which subset gets used will just depend on the application.

Modern AI Stack ✨ #

I’ve worked on several applications that depend on the MDS, and there’s excellent support for the common use cases. For every layer in the data flow – from the data warehouse, to the ELT pipeline, to the data visualization – there are plenty of choices of products, vendors, best practices, and so on.

Like the MDS, a Modern AI Stack is emerging around LLMs. There are now companies for vector databases (Pinecone, Chroma), monitoring and experiment tracking (Weights & Biases), frameworks (LlamaIndex[2]), and tools in several other common domains.
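
To make one layer of this stack concrete, here’s a minimal sketch of storing and querying unstructured text in a vector database. It assumes the open-source Chroma client (`chromadb`) with its default embedding function; the document IDs and text are just illustrative.

```python
# Minimal sketch: add a few documents to Chroma and run a similarity query.
# Assumes the `chromadb` package; text is embedded with its default embedding function.
import chromadb

client = chromadb.Client()  # in-memory client; persistent clients are also available
collection = client.create_collection(name="support_articles")

# Store unstructured text alongside IDs.
collection.add(
    ids=["doc1", "doc2", "doc3"],
    documents=[
        "How to reset your password",
        "Troubleshooting payment failures",
        "Exporting your data as CSV",
    ],
)

# Retrieve the documents most similar to a natural-language query.
results = collection.query(query_texts=["I can't log in"], n_results=2)
print(results["documents"])
```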

For software engineers, I believe this will have two main effects. First, like the Modern Data Stack, engineers will need to learn about these various tools and how to integrate them into other applications. Second, engineers can actually work on these tools or even create new ones!

Not-so-large language models 🐜 #

While there are several LLMs for the new generative AI use cases, they present some hurdles for adoption. They showcase the capabilities of GenAI well, but actually using them in applications is challenging – they can be expensive, slow, unspecialized, and closed. I’ve seen this with the GPT models and Claude myself. There’s noticeable latency when interacting with these larger models. Also, at some point, prompt engineering isn’t enough to overcome the unspecialized nature of these models’ training data.

An encouraging trend is the rise of small language models (SLMs), which, as the name suggests, are smaller than LLMs. The size here is the number of parameters: LLMs have hundreds of billions, and sometimes even trillions! An SLM is an excellent alternative when an application needs speed or specialization. A lot of the new development here has been derivatives of openly released models, like Meta’s Llama models. Take Phi-2[3], for example, which only has 2.7 billion parameters but is able to achieve remarkable performance on the standard benchmarks. There’s also IBM Granite[4] at 13 billion parameters, which performs very well on specialized tasks.
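
To get a feel for how approachable these smaller models can be, here’s a sketch of running Phi-2 locally with the Hugging Face `transformers` library. It assumes the `microsoft/phi-2` checkpoint, the `transformers` and `torch` packages, and enough memory for a 2.7-billion-parameter model.

```python
# Sketch: load a small language model (Phi-2, ~2.7B parameters) and generate text.
# Older transformers versions may need trust_remote_code=True for this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain in one sentence what a vector database does."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```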

My impression is that application developers will have to become familiar with which problems are best solved by which models and be able to combine their outputs smoothly. Since SLMs often derive from open-source models, it can be easier to create a custom SLM for a particular business domain. That said, LLMs will still be useful, and there will be a hybrid approach where a mix of LLMs and SLMs is used. It’s good to have both LLMs and SLMs as options in the toolbox!

New and old AI 🔄 #

With the surge of Generative AI, nearly all of the AI work pre-2022 is now considered “traditional AI”. Although the term seems quite broad, the theme is that traditional AI revolves around structured data. However, while GenAI is the new trend, many of the use cases of traditional AI still exist, namely prediction, recommendation, and classification. These didn’t suddenly evaporate after 2022. Even in the areas where GenAI’s capabilities overlap, I feel like traditional AI methods are a bit better studied, and implementations can be easier, faster, and cheaper.
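
For contrast, a “traditional” classification task over structured data can still be handled in a handful of lines with a library like scikit-learn – a quick sketch using its bundled Iris dataset:

```python
# Sketch: a "traditional AI" classifier over structured (rows-and-columns) data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # classic tabular features and labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```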

It’s possible that GenAI-based models could evolve to outperform traditional AI models in their specialties, but probably not in the near term. I feel like there’s again a hybrid approach here, where applications will use a mix of both kinds of models. Practitioners will need to understand how to solve the right problems with the right models, based on the relative performance vs. the costs.

The fragility of thin wrappers ⚠️ #

In 2023, many tech companies built products around Generative AI but delegated entire core capabilities to LLMs, leading to “thin wrappers”. Think of features like customer support chatbots or basic prompt relays from the user into an LLM. For each company attempting this, there were several more doing the same with a slightly different style. None of these had enough to differentiate themselves, and they could be overtaken by other companies or Big Tech.
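
To make “thin wrapper” concrete, here’s roughly what such a feature often boiled down to – a sketch using the OpenAI Python client, where the model choice and prompts are just illustrative:

```python
# Sketch of a "thin wrapper": the product is little more than a prompt relay to an LLM.
# Assumes the `openai` package (v1+ client) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def support_chatbot(user_message: str) -> str:
    """Relay the user's message to the LLM with a canned system prompt."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a helpful customer support agent."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(support_chatbot("How do I reset my password?"))
```

Almost all of the value here comes from the model; the wrapper itself adds very little that a competitor couldn’t reproduce in an afternoon.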

This is where “thick wrappers” come in. To achieve staying power, companies should dive deep on a specific problem, all the way from application development to the AI infrastructure. Owning the vertical on a problem domain both raises the barrier to competition and enhances the value delivered to customers. I believe engineers building LLM-centered products should be familiar with the whole stack (the traditional software stack along with the interfaces to LLMs) and should also be able to reason about the problem domain. This gives us the know-how to integrate classic techniques with GenAI when building features, and also lets us practice using LLMs to create real business value.

Agent Gen AI 😎 #

The next interesting area for Generative AI is AI that can independently execute tasks. These sorts of AI agents could handle multi-step tasks like finding a flight, booking a hotel, and reserving a rental car, with a human just describing the end requirements.

The current state of LLMs and surrounding components isn’t quite there yet, but it seems like the logical next step. I think application developers could benefit from considering agent-based AI for product ideas. It really does mesh well with how most human tasks are performed – the possibilities should be endless. Another exciting prospect is that attempting to build AI agents will also unlock a lot of opportunities to work on tooling that streamlines the process.
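
For a sense of the shape this could take, here’s a deliberately simplified sketch of an agent loop: a model proposes the next action and the program executes it. The `call_llm` function and the tools below are hypothetical stand-ins, not a real agent framework.

```python
# Simplified agent loop: the model picks the next action, the program executes it.
# `call_llm` and the tools are hypothetical placeholders used only for illustration.

def call_llm(goal: str, history: list[str]) -> dict:
    """Stand-in for a model call that picks the next action from the goal and progress so far."""
    plan = ["find_flight", "book_hotel", "reserve_rental_car", "done"]
    return {"action": plan[len(history)]}

def find_flight(goal: str) -> str:
    return "booked flight SFO -> JFK"

def book_hotel(goal: str) -> str:
    return "booked hotel in Manhattan"

def reserve_rental_car(goal: str) -> str:
    return "reserved a compact car"

TOOLS = {
    "find_flight": find_flight,
    "book_hotel": book_hotel,
    "reserve_rental_car": reserve_rental_car,
}

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while True:
        decision = call_llm(goal, history)
        if decision["action"] == "done":
            return history
        history.append(TOOLS[decision["action"]](goal))

print(run_agent("Plan a 3-day trip to New York in June"))
```

A real agent would replace `call_llm` with an actual model call (likely using tool/function calling) and the tools with real integrations, but the plan-act loop is the core idea.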

The price will eventually be right 💲 #

The cost of using LLMs and similar models has been steadily going down. It seems like this is mostly because companies providing similar models are often compared against each other, and price is one way to stay competitive. For instance, earlier this year, OpenAI lowered the costs[5] of using GPT-3.5 and introduced a cheaper text embedding model.

I think the biggest effect this will have on practitioners is the need to know generalizable patterns for how the various models can be used. If there are a few similarly performing models and one shows a clear price advantage, then it might be worth swapping one model out for another.
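
One lightweight way to prepare for that is to keep model selection behind a small layer of indirection, so a swap is a configuration change rather than a rewrite. Here’s a hypothetical sketch – the model names, prices, and quality scores are made up for illustration:

```python
# Hypothetical sketch: pick the cheapest model that clears a quality bar.
# The names, prices, and quality scores below are illustrative, not real benchmark data.
CANDIDATES = [
    {"name": "model-a", "price_per_1k_tokens": 0.0020, "quality": 0.90},
    {"name": "model-b", "price_per_1k_tokens": 0.0005, "quality": 0.88},
    {"name": "model-c", "price_per_1k_tokens": 0.0100, "quality": 0.95},
]

def pick_model(min_quality: float) -> str:
    """Return the cheapest candidate whose quality meets the threshold."""
    adequate = [m for m in CANDIDATES if m["quality"] >= min_quality]
    return min(adequate, key=lambda m: m["price_per_1k_tokens"])["name"]

print(pick_model(min_quality=0.85))  # -> "model-b", the cheapest adequate option
```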

From a different angle, while a competitive price may be necessary, it won’t be enough of a differentiator for these models. It’s possible to end up choosing between similarly priced models that attempt to solve the same category of problems. These might even perform about the same, but maybe one has a richer API or another has a wider community around it. This is similar to cloud providers, which all generally offer the same types of solutions but are better or worse in other ways. As with cloud providers, I think each AI model will be evaluated on how well it fits the particular application or meets the business need.

SaaS – to be or not to be? 🤔 #

Many, many software companies in the last decade focused on B2B SaaS products. Generative AI has the potential to change how SaaS is implemented, likely in one of two ways.

The first is that, because GenAI makes coding easier, a smaller team can build new SaaS products. Initially, this will probably mean SaaS businesses become leaner. Over time, though, I think a lot more of these businesses might be created, because they will be easier to launch with fewer staff.

A second outcome is that AI agents could handle the common SaaS needs for a business. For example, there could be specialized AI agents for HR operations, accounting, and marketing. With enough of these, a business could simply automate these functions without buying software from SaaS companies.

SaaS as it exists today probably isn’t here to stay. I believe application developers will benefit from knowing how to integrate GenAI into the business functions where it helps most. That expertise will either help folks accelerate operations at the companies they already work at or empower them to create new startups with just one or two people.

Closing thoughts #

It’s clear that GenAI is the main driver in the 2024 MAD Landscape and that it’ll have far-reaching effects. The potential for using LLMs in applications is great, but it’s still a bit early to say exactly how these trends will proceed. I’m very curious to see what happens!