Designing the Future: Insights on AI and Software Architecture from WeAreDevelopers 2025

by Nesrine Doghri | 6 August 2025 | Architecture, Artificial Intelligence

Nesrine Doghri

Senior Developer

Attending the WeAreDevelopers World Congress this year was an inspiring experience, defined by a buzz of energy and innovation. Over three days, from July 9th to 11th, Berlin once again brought together thousands of developers, AI experts, and industry leaders from around the world, establishing itself as one of the most important software conferences in Europe. With over 15,000 attendees, 8,000 companies, 500+ speakers, and a massive Tech Expo, it felt less like a typical conference and more like a thriving community gathering, where ideas flowed freely.

The event spanned hands-on workshops, live coding competitions, technical keynotes, and even some unconventional experiences. Amid this vibrant scene, two themes stood out: the surge of breakthroughs in artificial intelligence, and the enduring importance of strong software architecture. In the following sections, I’ll explore selected keynotes that highlight the transformative potential of AI and the critical role of robust architecture in creating maintainable, adaptable systems.

AI Innovation: Agents, MCP, Responsible AI, and LLMOps

GitHub CEO’s Vision of AI-Powered Developer Agents

The Congress kicked off with an inspiring keynote from GitHub CEO Thomas Dohmke, who unveiled a striking vision for AI-powered developer agents. These agents are no longer just autocomplete helpers; they’re becoming autonomous teammates capable of analyzing issues, writing code, debugging, testing, and managing pull requests.

Developers can tap into these agents in two key ways:

  • Synchronous collaboration: Agents work directly with humans, supporting tasks in real-time.
  • Asynchronous operation: Agents independently tackle tasks in the background.
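
The two modes above can be sketched with Python's asyncio; the `Agent` class and task strings here are hypothetical stand-ins for illustration, not GitHub's actual agent API:

```python
import asyncio


class Agent:
    """Hypothetical developer agent; a real one would call an LLM backend."""

    def __init__(self, name: str):
        self.name = name

    async def run(self, task: str) -> str:
        await asyncio.sleep(0)  # stand-in for model inference and tool calls
        return f"{self.name} completed: {task}"


async def main() -> None:
    agent = Agent("copilot")

    # Synchronous collaboration: the developer waits for each result.
    review = await agent.run("review the open pull request")
    print(review)

    # Asynchronous operation: start a background task and keep working.
    background = asyncio.create_task(agent.run("migrate legacy tests"))
    print("developer keeps coding...")
    print(await background)  # collect the agent's result later


asyncio.run(main())
```

The difference is purely in when the developer collects the result: immediately (pairing) or later (delegation).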

As Dohmke put it, “It’s not that AI replaces you. It makes you more powerful, faster, and frees you from the boring stuff.” He painted a future of hybrid collaboration, where developers orchestrate teams of agents across everything from coding to security testing. Remarkably, he predicted that agents could eventually generate up to 90% of code, with developers taking on guiding and reviewing roles.

Demo Snapshot:

  • An asynchronous agent migrated tests and opened a pull request without human intervention.
  • A synchronous agent guided the live development of a Snake game—suggesting improvements, debugging, and adding features on the spot.
  • Occasional demo stumbles proved that human expertise and guidance still matter.

Key Lessons:

  • AI agents are not a replacement for creativity—they help boost productivity and sharpen focus.
  • Developers become orchestrators, leaning on AI for agility and problem-solving.

The Model Context Protocol (MCP): A Foundation for AI Agents

Moving beyond automation, the Model Context Protocol (MCP) emerged as a vital backbone for context-aware agent ecosystems. MCP is like the “USB-C for AI”: a standardized connector enabling agents to plan, reason, access shared memory, invoke tools, and coordinate their efforts. Many keynotes and hands-on sessions highlighted the critical role of MCP, emphasizing its importance in building seamless and efficient AI agent interactions.

MCP in Action:

  • MCP clients embedded within agents send structured requests.
  • MCP servers expose resources, APIs, and memory securely.
  • Public registries (like mcp.so and OpenTools) set standards for discovering and accessing tools.
  • Toolkits such as ADK, LangGraph, and CrewAI simplify MCP adoption.
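
As a rough illustration of such a structured request, here is the JSON-RPC 2.0 message shape MCP clients send to servers; the method name follows the public MCP specification, while the tool name and arguments are invented for this sketch:

```python
import json

# Minimal sketch of an MCP tool invocation (JSON-RPC 2.0).
# "tools/call" is the spec's method name; "search_docs" and its
# arguments are hypothetical examples of a server-exposed tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",
        "arguments": {"query": "deployment checklist"},
    },
}

print(json.dumps(request, indent=2))
```

Because every client and server speaks this same envelope, agents can discover and invoke tools from any compliant server without bespoke integration code.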

Key Lessons:

  • MCP paves the way for interoperable, scalable agent ecosystems.
  • Standardization accelerates the development of new agent-based applications.

Responsible AI: Building Trust by Design

Trust was at the heart of the “Trust by Design” panel, where leaders from government, finance, and OpenAI Germany explored ethics in AI. Nicholas Hadim of OpenAI presented a practical Responsible AI framework:

  • Teach: Use high-quality, unbiased data for training.
  • Test: Rigorously benchmark and stress-test with red teaming.
  • Share: Gather structured user feedback and act on it.

Panelists emphasized that transparency in AI isn’t just about how systems work, but why they’re being used. Safeguards like the EU AI Act and regulated open-source infrastructure ensure AI remains centered on people.

Key Lessons:

  • Intentional design, transparency, and ongoing feedback form the bedrock of responsible AI.
  • Strong safeguards and broad stakeholder engagement are critical as AI use expands.

NVIDIA’s Blueprint for Fine-Tuning Large Language Models

Large language models (LLMs) such as GPT and LLaMA have become foundational in enterprise AI, making LLMOps—managing, fine-tuning, and deploying LLMs—essential.

What is LLMOps and Why Do We Need It?

  • LLMOps manages the complete lifecycle: data preparation, fine-tuning, evaluation, and deployment.
  • Unlike traditional MLOps, LLMOps meets the demands of LLMs’ scale and complexity.
  • It gives teams control, auditability, and automation, ensuring reliable, targeted model improvements.

A Glimpse at NVIDIA’s LLMOps Pipeline:

  • The pipeline integrates ArgoCD for environment sync, Argo Workflows for automating stages, and NeMo/NIM for adapting and deploying models.
    • ArgoCD: Aligns infrastructure and models with approved Git versions.
    • Argo Workflows: Automates steps like data loading, base model setup, tuning, evaluation, and rollout.
    • NVIDIA NeMo & NIM: Drive targeted fine-tuning and reliable model serving.
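
A minimal Python sketch of such a staged, traceable run, with hypothetical stage names; a real setup would express these steps as Argo Workflows manifests, with ArgoCD pinning everything to an approved Git revision:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the staged pipeline described above: each
# run is tied to one Git revision and logs every completed stage,
# which is what makes it auditable and reproducible.

@dataclass
class PipelineRun:
    git_revision: str                       # ArgoCD pins infra/models to this
    completed: list[str] = field(default_factory=list)

    def run_stage(self, stage: str) -> None:
        # In Argo Workflows, each stage would be a workflow step.
        self.completed.append(stage)
        print(f"[{self.git_revision}] finished: {stage}")


stages = ["load data", "set up base model", "fine-tune", "evaluate", "roll out"]
run = PipelineRun(git_revision="abc1234")
for stage in stages:
    run.run_stage(stage)
```

The point is not the toy class but the shape: an ordered, automated sequence whose every step is recorded against a known revision.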

Why Fine-Tuning Matters

  • Fine-tuning customizes models for real business needs, continually improving their performance and capturing domain expertise.
  • Each improvement is benchmarked, reproducible, and carefully tracked following best practices in LLMOps.

Key Lessons:

  • Fine-tuned LLMs deliver business value for complex, real-world use cases.
  • Automated, traceable processes accelerate innovation and help maintain high quality.

For a deeper dive, check out the NVIDIA blog article: Fine-Tuning LLMOps for Rapid Model Evaluation and Ongoing Optimization.

Software Architecture: Principles, Modernization, and Case Studies

While AI breakthroughs took center stage, other keynote talks offered enduring insights on building resilient, maintainable systems. These sessions reminded us that while technologies change, robust architecture remains the backbone of reliable software.

Fundamentals of Modern Software Architecture

David Tilke revisited what software quality really means—it’s much more than just shipping features. Quality is about maintainability, testability, and adaptability as systems evolve. Three essential layered perspectives shape strong architecture:

  • System Architecture: Organize solutions into clear, cohesive services or domains.
  • Software Architecture: Define the internal workings of each service for flexibility and resilience.
  • Software Design: Write code with long-term maintainability in mind.

Separating these layers helps teams stay agile and makes projects easier to adapt and improve. Tilke’s “20-second rule” was a memorable takeaway: if a function or class can’t be found in under 20 seconds, it needs reorganizing.

Key Lessons:

  • Aligning modular design with the business structure makes systems easier to change and manage.
  • Applying proven patterns like the Single Responsibility Principle and Dependency Inversion ensures scalable, maintainable code.
  • Naming domains clearly aids team alignment and code readability.
  • True collaboration between architects and developers is the foundation of good architecture.
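
As a small illustration of the Single Responsibility and Dependency Inversion principles mentioned above, here is a hedged Python sketch with hypothetical domain names:

```python
from typing import Protocol

# Each class has one reason to change (SRP), and the service depends
# on an abstraction rather than a concrete store (Dependency Inversion).
# "OrderRepository" and friends are invented names for illustration.

class OrderRepository(Protocol):
    def save(self, order_id: str) -> None: ...


class InMemoryOrderRepository:
    def __init__(self) -> None:
        self.orders: list[str] = []

    def save(self, order_id: str) -> None:
        self.orders.append(order_id)


class OrderService:
    # Depends on the OrderRepository abstraction, so storage can be
    # swapped (e.g. for a database-backed repository) without touching
    # the business logic.
    def __init__(self, repo: OrderRepository) -> None:
        self.repo = repo

    def place_order(self, order_id: str) -> None:
        self.repo.save(order_id)


repo = InMemoryOrderRepository()
OrderService(repo).place_order("order-1")
print(repo.orders)  # ['order-1']
```

Clear domain names in the types double as documentation, which is exactly the alignment benefit the talk highlighted.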

Breaking the Monolith: Lessons from a Real-World Migration

Babette Rhemrev (Axel Springer) shared a candid story of overcoming legacy system roadblocks. Incremental refactoring couldn’t break through tightly knit dependencies, so her team embraced a bold, data-first migration approach:

  • Built a clean, minimalist cloud-native data store in parallel.
  • Reimagined components using a lean domain model.
  • Used AWS Lambda and SQS for event-driven microservices.

Running old and new systems side by side meant zero downtime as the migration progressed.
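
A minimal sketch of one event-driven step in such a setup, assuming the documented SQS-to-Lambda event shape; the message fields and handler logic are hypothetical:

```python
import json

# AWS Lambda handler consuming SQS messages. The "Records"/"body"
# structure follows AWS's SQS event source integration; the
# "article_id" field is an invented domain detail for this sketch.

def handler(event: dict, context=None) -> dict:
    migrated = []
    for record in event.get("Records", []):   # one record per SQS message
        body = json.loads(record["body"])
        migrated.append(body["article_id"])   # hypothetical domain field
    return {"migrated": migrated}


# Example invocation with a minimal SQS-style event.
event = {"Records": [{"body": json.dumps({"article_id": "a-17"})}]}
print(handler(event))  # {'migrated': ['a-17']}
```

Because each handler owns one small step and communicates only via queued events, the new system could grow alongside the legacy one without coordination bottlenecks.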

Key Takeaways:

  • Sometimes starting fresh is more effective than endless small refactors.
  • Modular microservices simplify onboarding, scaling, and securing systems.
  • Managed, cloud-native services free teams to deliver value faster.

Building Systems That Last: Insights from Amazon’s Journey

Dr. Werner Vogels, Amazon’s CTO, offered a concise look at Amazon’s architectural evolution. Starting with a monolith, Amazon broke its systems into service-oriented and later microservices architectures, enabling flexibility and sustainable growth. Central was the principle of evolvability: designing modular, decoupled systems like S3 that can adapt and scale over time. Amazon adopted cell-based designs to isolate potential failures and ensure high availability.

Vogels also stressed a culture of cost awareness and sustainability. Developers use real-time cost metrics to optimize usage—turning off unused instances in the cloud or switching to efficient languages like Rust—to reduce waste and environmental impact. This mindset ties technical excellence to responsible business and environmental practices.

Key Lessons:

  • Embrace evolvability and modularity from the start.
  • Automate infrastructure and focus on innovation.
  • Build for reliability by isolating and containing failures.
  • Make cost and sustainability visible and actionable throughout the development process.
  • Encourage continuous learning and improvement within your teams.

Conclusion

The event was an exciting whirlwind for me, with 11 stages, each covering a unique topic. Beforehand, I used the WeAreDevs App to create my own agenda, which helped a lot with planning. I loved the variety, but with only 30 minutes per session, it often felt like the time was up just as things got interesting. Some of those talks deserved at least 40 minutes! The constant rush between sessions made things hectic, especially since the venue was spread across three separate buildings with only 10-minute breaks, but it was totally worth it for the experience.

Ultimately, the WeAreDevelopers World Congress 2025 offered a bold and inspiring glimpse into the future of the software industry, blending groundbreaking AI advancements with timeless architectural principles. The event brought together experts and enthusiasts from across the globe to explore a diverse range of software development topics, including leadership, AI, machine learning, and DevOps. Conferences like this go beyond keeping us informed about trends—they inspire reflection, drive adaptation, and foster the creation of resilient systems designed to tackle current challenges while anticipating future needs. It’s a powerful reminder of the importance of continuous learning and community in our rapidly evolving field.