Does Language Matter in LLM Programming?

By Bayes


A fascinating experiment unfolded during a recent training session. Working with Spanish-speaking scientists and software developers, I encountered a compelling question: Does the language we use to prompt Large Language Models (LLMs) affect the quality of their programming assistance?

The Multilingual Surprise

Let’s start with something unexpected. Consider this prompt, which I recently gave to an LLM:

“Create a RESTful API service usando Spring Boot que gestiona el inventario de una biblioteca. Das System soll Spring Boot Best Practices demonstrieren, mientras mantiene una complejidad media suitable for demonstration purposes…”

(Rendered entirely in English: “Create a RESTful API service using Spring Boot that manages a library’s inventory. The system should demonstrate Spring Boot best practices, while maintaining a medium complexity suitable for demonstration purposes…”)

Did you catch that? The prompt seamlessly blends English, Spanish, and German, sometimes switching mid-sentence, and the LLM understood it perfectly, delivering a well-structured Java program with multilingual documentation. This isn’t just a party trick; it reveals something fundamental about how these models process language. (The full prompt expands to six languages, including Japanese!)
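
If you want to reproduce this kind of experiment, here is a minimal sketch assuming the openai Python client; the model name is a placeholder, and any chat-capable model should behave similarly:

    from openai import OpenAI

    # Assumes an API key is available via the OPENAI_API_KEY environment variable.
    client = OpenAI()

    # The code-switching prompt from above, truncated as in the article.
    prompt = (
        "Create a RESTful API service usando Spring Boot que gestiona "
        "el inventario de una biblioteca. Das System soll Spring Boot "
        "Best Practices demonstrieren, mientras mantiene una complejidad "
        "media suitable for demonstration purposes..."
    )

    # "gpt-4o" is illustrative; substitute whichever model you use.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)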

Breaking the English-First Assumption

The conventional wisdom is clear: since most LLMs are trained primarily on English data, English prompts should yield superior results. It’s a logical assumption, but it might not tell the whole story.

Think about how concepts are represented in an LLM’s vector space. When we discuss a Java microservice or a space probe’s CCD (Charge-Coupled Device), we’re not just dealing with words—we’re dealing with concepts. These technical concepts cluster together in the model’s understanding, regardless of the language used to access them.

The Vector Space Perspective

Imagine you’re looking at a star cluster through different colored filters. The cluster’s structure remains the same, even though each filter reveals it differently. Similarly, whether you ask about “garbage collection” in English or “recolección de basura” in Spanish, you’re pointing to the same region in the model’s conceptual space.
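
You can probe this intuition directly with multilingual sentence embeddings. Here is a minimal sketch using the sentence-transformers library; the specific model is one reasonable choice among several:

    from sentence_transformers import SentenceTransformer, util

    # A model trained to map many languages into a single vector space.
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    terms = [
        "garbage collection",            # English
        "recolección de basura",         # Spanish
        "Speicherbereinigung",           # German
        "library inventory management",  # unrelated concept, for contrast
    ]
    embeddings = model.encode(terms)

    # Pairwise cosine similarities: different languages' names for the same
    # concept should land close together; the unrelated phrase should not.
    scores = util.cos_sim(embeddings, embeddings)
    for term, row in zip(terms, scores):
        print(term, [round(float(s), 2) for s in row])

If the shared conceptual space works as described, the three names for garbage collection should score far more similar to one another than to the unrelated phrase.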

Where Language Choice Does Matter

However, the reality is nuanced. There are scenarios where language choice becomes more critical:

  1. Emerging Technologies: When discussing cutting-edge tech, English often has the vocabulary first. Try explaining “transformer attention mechanisms” in another language, and you’ll likely end up using English terms anyway.

  2. Technical Documentation: Since most major libraries and frameworks document in English first, prompts about specific API usage might benefit from English terminology.

  3. Code Comments and Naming Conventions: While the model can generate code with variables named in any language, international best practices generally favor English for maintainability (see the short sketch after this list).
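
To make the third point concrete, here is the same made-up function written twice. An LLM handles both versions equally well; the naming convention matters for the humans who maintain the code:

    # Spanish identifiers: perfectly valid, and an LLM reads them without issue.
    def calcular_total_inventario(libros):
        return sum(libro.cantidad for libro in libros)

    # English identifiers: the conventional choice for international codebases.
    def calculate_inventory_total(books):
        return sum(book.quantity for book in books)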

The Spanish Team Experience

Working with the Spanish team revealed an interesting pattern. When developers prompted in Spanish about general programming concepts or algorithms, the responses were often just as sophisticated as their English counterparts. Even when dealing with specific error messages or stack traces (which are typically in English), embedding them in otherwise Spanish conversations didn’t diminish the model’s ability to understand the problem and suggest solutions.
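
A typical exchange looked something like this (a reconstructed example, not a verbatim transcript):

“Mi servicio falla al arrancar con esta excepción: ‘java.lang.NullPointerException: Cannot invoke "Repository.findAll()" because "this.repository" is null’. ¿Cuál es la causa más probable y cómo la corrijo?”

(In English: “My service fails on startup with this exception: … What is the most likely cause and how do I fix it?”)

In exchanges like this, the Spanish framing did not get in the way of the English stack trace, or vice versa.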

Nuance is the Key

Here’s an example of how the nuance of language can affect the quality of the response:

English prompt:

“Explain backpropagation in neural networks, specifically focusing on the vanishing gradient problem and how techniques like ReLU activation functions and skip connections help address this issue.”

French prompt:

“Expliquez la rétropropagation dans les réseaux de neurones, en particulier sur le problème du gradient qui s’évanouit et comment les techniques comme les fonctions d’activation ReLU et les connexions résiduelles aident à résoudre ce problème.”

The key difference emerges in the technical terminology:

  • “Backpropagation” is the original term, coined in the English-language literature
  • “Rétropropagation” is a translation that loses some of the term’s technical connotations
  • Terms like “ReLU” circulate untranslated in French ML literature, showing how thoroughly English dominates the field

Here’s an even clearer example with legal technology:

English prompt:

“Explain how e-discovery tools use predictive coding and TAR (Technology Assisted Review) for document classification in common law jurisdictions.”

French prompt:

“Expliquez comment les outils de e-discovery utilisent le codage prédictif et la révision assistée par technologie pour la classification des documents dans les juridictions de common law.”

In this case:

  • “e-discovery” itself is often left untranslated in French legal tech
  • “TAR” has no standard French acronym
  • “Common law” concepts are inherently based in English-speaking legal systems
  • The technical workflow terminology comes from English-language software and procedures

The French version requires borrowing English terms or using approximate translations, potentially losing some precision in describing specialized workflows and technological processes that were originally developed and documented in English. This demonstrates how domain-specific technical language often carries more precise meaning in its original English form, though the core concepts remain understandable in translation.

A New Mental Model

Most of the time, however, it seems we don’t need to worry about this. Perhaps we need to update our mental model of how LLMs handle multilingual programming queries. Instead of thinking about translation (“the model translates my Spanish to English, processes it, and translates back”), we might better conceptualize it as direct access to a language-agnostic understanding of programming concepts.

Practical Implications

This has practical implications for global development teams:

  • Team members can interact with LLMs in their preferred language, mostly without fear of degraded results
  • Documentation can be generated in multiple languages while maintaining technical accuracy
  • Code reviews can be discussed in any language while keeping the code itself internationally accessible

Looking Forward

As LLMs continue to evolve, we might see even more sophisticated handling of multilingual programming contexts. Imagine models that can seamlessly switch between languages while maintaining perfect technical context, or those that can help harmonize codebases across international teams.

Conclusion

The next time you’re about to switch to English for that technical prompt, ask yourself whether the concepts you’re trying to access might be just as reachable in your native language. The key is clarity of thought, not necessarily the language it’s expressed in.

What’s your experience been with multilingual programming and LLMs? Have you noticed any differences when prompting in different languages? The field is still young, and every developer’s experience adds to our collective understanding.


The author specializes in LLM applications in software development, recently working with the European Space Agency’s international development teams.