The Future of Building: The Ethics of Speed

The Future of Building is a series written by our CEO, Stephanie Gupana. The series explores how technical performance evolves when humans, systems, and AI share the same production loop. Each essay examines a different layer of performance, from cognitive to technical to organizational, and considers how teams can build responsibly in an AI-accelerated world.


Somewhere along the way, “fast” became a moral good, and AI has turned that dial all the way up.

Code now helps write code.
Pipelines can trigger themselves.
A feature idea can skip the long corridor between “maybe” and “live.”

In tech, in business, or in life, speed is often treated as proof of intelligence.

We’ve built an economy that equates acceleration with evolution and often forgets to pause and ask what we might lose along the way.

With the rise of AI, what once took days of human iteration can now be generated in minutes, not always correctly, but almost instantly. (McKinsey & Company. (2023, August 3). The state of AI in 2023: Generative AI’s breakout year.)

No question, collapsing the distance between idea and execution increases output, but it also removes the natural friction that used to surface error, debate, and restraint.

The future of software development may depend less on how fast we move and more on what we still choose to slow down for.

When Machines Outrun Meaning

Large Language Models (LLMs) are efficient. But…

…they do not understand context or implications, and they will never challenge you the way a Product Owner would. A Product Owner asks questions that preserve situational awareness, like “What problem are we solving?” or “What assumption are we treating as fact?”

Relying on automated output without questioning it invites automation bias: the tendency to trust machine suggestions even when they conflict with better judgment.

Human judgment in software development, especially the slower and more interpretive kind, is what keeps systems aligned with human needs.

When judgment is outsourced to automation, systems may perform technically but not intelligently.

LLMs generate solutions by identifying statistical patterns in their training data. They cannot judge whether a solution is appropriate, because weighing constraints, tradeoffs, downstream consequences, and human stakes requires forms of reasoning that remain in the human domain.

The Moral Question Hidden in Metrics

Speed always has a cost. In this case, the cost is paid in carbon, in human cognition, and in a well-documented erosion of trust.

A 2023 study from Columbia Climate School estimates that training an LLM can require more than 1,200 MWh of energy and emit more than 500 tons of CO₂-equivalent in a single run. (de Vries, A. (2023, June 9). AI’s growing carbon footprint. Columbia Climate School.)

A single prompt is commonly estimated to use around 3 Wh of electricity, roughly ten times the energy of a typical web search, though more recent analysis suggests the figure may be closer to 0.3 Wh. (Epoch AI. (2025, February 7). How much energy does ChatGPT use?)
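To put those figures side by side, here is a quick back-of-envelope calculation (a sketch in Python; the constants are the published estimates cited above, not measurements, and real-world figures vary by model and hardware):

```python
# Back-of-envelope comparison of the energy figures cited above.
# All constants are published estimates, not measurements.

WH_PER_PROMPT_HIGH = 3.0   # commonly cited per-prompt estimate
WH_PER_PROMPT_LOW = 0.3    # Epoch AI's more recent estimate
WH_PER_SEARCH = 0.3        # typical web search

TRAINING_MWH = 1_200       # one LLM training run (Columbia Climate School figure)

# At the high estimate, a prompt costs about 10x a web search;
# at the low estimate, it is roughly on par with one.
print(f"Prompt vs. search (high): {WH_PER_PROMPT_HIGH / WH_PER_SEARCH:.0f}x")
print(f"Prompt vs. search (low):  {WH_PER_PROMPT_LOW / WH_PER_SEARCH:.0f}x")

# 1,200 MWh is 1.2 billion Wh, so a single training run uses the energy
# of roughly 400 million prompts even at the high per-prompt estimate.
prompts = TRAINING_MWH * 1_000_000 / WH_PER_PROMPT_HIGH
print(f"One training run ≈ {prompts:,.0f} prompts")
```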

These are not abstract metrics; they reflect the hidden cost of treating speed as progress and the growing challenge of AI energy consumption. Recognizing these costs does not mean avoiding AI; it means building more responsible AI oversight that keeps human judgment in the loop.

Adoption will continue to accelerate, especially as companies develop private models to protect proprietary data. The question is no longer whether teams will use these tools, but how they will use them in ways that preserve judgment, context, and accountability.

Our Open Source Learning Commitment

At Ruoom, we are publishing our LLM prompting experiments, not to replace human judgment but to show why it is essential to keep it in the loop. By documenting the gaps, failures, and surprises in AI-generated code, we want to help teams learn to prompt responsibly and evaluate outputs critically. And by investigating how these tools behave and sharing what we learn, we hope to help others avoid wasted processing power and the environmental cost tied to it.
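As an illustration of what that documentation can look like, here is a minimal sketch (a hypothetical record format in Python, not our published schema) that logs a prompt, the model’s output, and the human reviewer’s verdict side by side:

```python
from dataclasses import dataclass, field

@dataclass
class PromptExperiment:
    """One logged prompting experiment: what we asked, what the model
    produced, and what a human reviewer concluded about it."""
    prompt: str
    model: str                  # hypothetical model identifier
    output: str
    human_verdict: str          # e.g. "correct", "subtly wrong", "unusable"
    gaps_and_surprises: list[str] = field(default_factory=list)

log = [
    PromptExperiment(
        prompt="Write a function that validates email addresses.",
        model="example-llm-v1",
        output="def validate(s): return '@' in s",
        human_verdict="subtly wrong",
        gaps_and_surprises=["Accepts '@' alone; never checks the domain."],
    ),
]
```

Even a log this simple makes patterns visible over time: which kinds of prompts fail quietly, and which outputs look plausible but do not hold up to review.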

Since we are an open-source software company, we run these experiments in public. Open source provides a low-risk space to observe how AI behaves before teams rely on it inside systems where mistakes carry real consequences.

What Comes After Fast

The future of building will not depend on slowing down for its own sake but on designing systems that know when speed stops serving the work. AI will make it easier than ever for software development to move quickly. Our job is to build the mechanisms that help us know when not to.


About Stephanie

Stephanie Gupana is the co-founder & CEO of Ruoom®, an open-source software company. When she is not leading Ruoom, she runs Hi From Business Camp®, a performance coaching practice where she blends neuroscience, psychology, and evidence-based business strategy to help ambitious doers understand how their patterns of operating shape what they build. Connect with her on LinkedIn.


If you’re looking for more support, here’s how we can help:

Resources & Guides: Access helpful guides, tutorials, and code to build the tools you need for your business.

Open Core Access: Get free access to the core software and tools. Use them as-is, or customize them with your own coding skills or a developer’s help. As your business grows, you can purchase plugins to unlock more advanced features.

Custom & Off-the-Shelf Software Solutions: We work directly with you to build custom software tailored to your workflows, datasets, and integrations.