What a wild time to be alive — especially with Generative AI reshaping everything, including how we build software. It's exciting and unsettling in equal measure.

A few weeks ago, after many attempts, I gave in to my curiosity and dove deep. I haven't been able to stop since. I can confirm that AI is giving me more hours of work, not fewer — this topic has me completely absorbed. So today I'm pausing to clear my head and share what I've been learning. And what better way to do that than writing about the journey?

⚠️ Important
This article was edited with AI assistance, but it was written entirely and originally by me, in Spanish, and translated into English for broader reach. The editing process was intentional: I deliberately withheld full context from the AI, using it to improve clarity, not to take authorship.

What an exciting time, and what a privilege to be part of the technological and cultural shift that is opening a new era in human history. The way we manage information, learn, communicate, and do business is evolving at an unprecedented pace, pushing us to break limits and redefine standards and paradigms we once thought were fixed.

In this article I want to share how I've been taking on this challenge, what principles have emerged, and what comes next in my journey.

Impact on People


There is a shift in expectations we must embrace to evolve well and reduce social disruption as we adopt AI. We need to accept that in technology the bar has been raised: knowledge alone is no longer the primary asset. We can assume anyone has it, and if they don't, PhD-level LLMs are readily available to fill the gap. What must set us apart, then, is excellence: contextualized knowledge applied to the business, empathy toward people, the ability to navigate trade-offs like security vs. speed or quality vs. speed, and the capacity to deliver more in the same or less time.

My biggest piece of advice: we need to change our mindset. Today, attitude, empathy, and resilience matter more than ever. Specifically, we should:

  • Overcome the fear of change: get uncomfortable and stop resisting.
  • Unlearn fast and learn better with AI.
  • Stop fearing failure: fail fast and in a controlled way.
  • Deliver value beyond the purely technical: learn the business, no excuses.
  • Get used to having your output measured constantly.
  • Challenge the status quo.
  • Cultivate enormous curiosity about redesigning your workflows and letting AI help you.

The most meaningful changes start from within. Accept that you need to change, and start your journey today.

Principles

When I asked myself how to adopt Generative AI responsibly, I defined a set of principles to govern that change.

1. No Human Replacement

💡 Principle
Artificial intelligence must not replace humans — it must elevate them and multiply their capabilities.

We must understand that, as humans, we cannot and should not blindly trust everything AI generates. Its true value is not in replacing our judgment, but in acting as a capability accelerator — a powerful tool to grow, learn, analyze faster, and execute better.

However, the speed AI provides does not eliminate the need for human judgment; on the contrary, it makes it even more critical. We still need people who can interpret context, question results, identify risks, recognize ethical nuances, and make responsible decisions.

AI can propose, summarize, infer, and suggest paths — but the responsibility for the final decision must not rest solely with the model. It must remain with the people and organizations using it.

This is why, more than accepting automatic outputs, we must demand traceability in the reasoning: understand what information was used to build a recommendation, what assumptions sustain it, what its limitations are, and how much confidence it deserves. It's not just about using AI — it's about using it with oversight, judgment, and the capacity to validate.

2. Human in the Decision Loop

The human retains final authority over critical decisions — accepting, rejecting, or requesting revisions on AI proposals.

BEFORE: Developer = Coder
NOW:    Developer = AI Governor

Responsibilities:
├── Define vision and priorities
├── Approve/reject agent proposals
├── Validate quality at quality gates
├── Provide business context the AI lacks
└── Strategic architectural decisions

HUG AI Principles (Human-Governed AI)

HUG AI is an open-source methodology that provides a structured framework for designing, developing, and maintaining AI-driven solutions, with the human as the central axis of the process. Its premise is simple: don't replace the developer — multiply their capabilities.

  1. Human-AI Collaboration: not replacement, multiplication
  2. Quality over Speed: prioritize security and maintainability
  3. Context Awareness: agents need context to function
  4. Continuous Learning: agents improve with human feedback

Approval Gates

We must define at which points humans intervene and what approvals we require. These approvals must be fully documented and stored for audit processes, investigations, and quality assurance.

[Spec] → ✅ Architect approves → [Design] → ✅ CTO approves (Disruptive change) →
[Code] → ✅ Lead approves → [Tests] → ✅ QA approves →
[Security] → ✅ SecOps approves → ✅ CISO authorizes → [Deploy]
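The gates themselves can be expressed as code, so that every approval is recorded and queryable for audits. The following is a minimal sketch: the gate names and roles mirror the diagram above, but the classes and fields are hypothetical, not part of any specific tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GateApproval:
    """One documented approval at a quality gate."""
    gate: str            # e.g. "spec", "design", "security"
    approver_role: str   # e.g. "Architect", "CISO"
    approved: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ApprovalTrail:
    """Append-only record of approvals, kept for audits and QA."""
    approvals: list = field(default_factory=list)

    def record(self, gate: str, role: str, approved: bool) -> None:
        self.approvals.append(GateApproval(gate, role, approved))

    def all_passed(self, required_gates: list) -> bool:
        # Deploy is allowed only if every required gate has an approval.
        passed = {a.gate for a in self.approvals if a.approved}
        return all(g in passed for g in required_gates)

trail = ApprovalTrail()
trail.record("spec", "Architect", True)
trail.record("code", "Lead", True)
trail.record("security", "SecOps", True)
print(trail.all_passed(["spec", "code", "security"]))  # prints True
```

Because the trail is plain data, it can be serialized and stored alongside the delivery it governs, which is exactly what the audit requirement above asks for.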

3. End-to-End Traceability

Every AI-supported process must have complete traceability. This means structured recording of the prompts used, sources consulted, model version, generated outputs, human interventions, and the final decision made.

Without traceability, there is no real governance over AI use.
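As a sketch, a trace record covering the fields listed above could look like this, assuming a simple append-style log; all field names and the model identifier are illustrative, not a standard schema.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class AITraceRecord:
    """Structured record of one AI-assisted step, per the principle above."""
    prompt: str
    sources: list
    model_version: str
    output: str
    human_action: str    # "approved" | "rejected" | "revised"
    final_decision: str

    def to_log_entry(self) -> dict:
        entry = asdict(self)
        # Hash the record so later tampering with the log is detectable.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["checksum"] = hashlib.sha256(payload).hexdigest()
        return entry

record = AITraceRecord(
    prompt="Summarize the Q3 incident report",
    sources=["incident-2024-q3.md"],
    model_version="model-x-1.2",  # assumed identifier, for illustration
    output="Three outages, all DNS-related.",
    human_action="approved",
    final_decision="publish summary",
)
entry = record.to_log_entry()
```

The point is not the exact fields but that every element the principle names (prompt, sources, model version, output, human intervention, final decision) ends up in a structured, auditable entry rather than scattered across chat histories.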

4. Everything as Code

To operate agentic systems with real control, we must treat everything that defines their behavior as code: infrastructure, configurations, prompts, rules, policies, flows, and approval criteria. This allows us to version, review, test, audit, and deploy changes safely and repeatably.
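For example, a prompt definition can live in the repository as versioned data and be validated before use, so changes go through the same review and testing as any other code. Everything here, including the field names and the `{input}` placeholder convention, is a hypothetical convention for illustration.

```python
# Fields every versioned prompt definition must carry (assumed convention).
REQUIRED_FIELDS = {"name", "version", "template", "approved_by"}

def validate_prompt_config(config: dict) -> list:
    """Return a list of validation errors; an empty list means valid."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - config.keys())]
    # Enforce the placeholder convention so the template is actually usable.
    if "{input}" not in config.get("template", ""):
        errors.append("template must contain an {input} placeholder")
    return errors

# A prompt definition as it might be committed to the repository.
prompt_config = {
    "name": "release-notes-writer",
    "version": "1.3.0",
    "template": "Write release notes for: {input}",
    "approved_by": "Lead",
}
errors = validate_prompt_config(prompt_config)
```

Run in CI, a check like this turns prompt changes into reviewable, testable diffs instead of silent edits in a tool's UI.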

5. Operational Reversibility

Every automated output must be correctable or reversible in a controlled way, with an assigned owner, clear activation criteria, and complete evidence of the intervention performed. Without reversibility, there is no real operational control over agentic or AI systems.
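A minimal sketch of what that could look like in practice: every automated change registers who owns its rollback, when rollback may be triggered, and the compensating operation itself. The feature flag, owner, and threshold are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReversibleAction:
    """An automated change paired with its controlled undo path."""
    name: str
    owner: str                  # assigned owner for the rollback
    activation_criteria: str    # clear criteria for triggering rollback
    undo: Callable[[], str]     # compensating operation; returns evidence

# Hypothetical system state changed by an automation.
feature_flags = {"new-checkout": True}

def disable_new_checkout() -> str:
    feature_flags["new-checkout"] = False
    return "new-checkout disabled"

rollback = ReversibleAction(
    name="enable new-checkout flow",
    owner="Lead",
    activation_criteria="error rate > 2% for 10 minutes",  # assumed threshold
    undo=disable_new_checkout,
)

# Triggering the rollback produces the evidence of the intervention.
evidence = rollback.undo()
```

The shape matters more than the details: no automation ships without a named owner, explicit activation criteria, and an undo path that leaves evidence behind.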

6. Equivalent or Superior Quality

Every AI-assisted delivery must meet, at minimum, the same review, testing, security, and quality standards required of any non-assisted delivery. Using AI does not reduce controls or relieve responsibility.

7. Privacy and Security by Design

Every AI-supported solution must incorporate privacy and security from the design phase. Security and privacy must not be added later — they must be structural parts of the solution from the start.

8. Transparency Toward Users and Affected Parties

When a person interacts with AI, or when a result has been generated or significantly influenced by it, this must be communicated clearly and understandably. Transparency is not optional: it is a basic condition for protecting trust.

9. Context Before Automation

No process should be automated without sufficient and verified context. Every automation must start from clear documentation, reliable data, explicit rules, and a solid understanding of the business objective.

Automating without context doesn't accelerate value — it accelerates error.

10. Continuous Improvement Based on Evidence

We must systematically measure the quality, risks, and performance of AI-assisted systems and processes, collect feedback from users and teams, and incorporate that evidence into formal improvement cycles. No AI solution should ever be considered "finished" or "correct by default".

In other words: deploying is not enough. You must observe, measure, learn, and adjust.

Next Steps

The principles I described in this article are not a wish list — they are the rules I commit to upholding in every project. The road is far from over; what comes next is making them concrete.

  1. Measure maturity. Before adopting any AI tool or workflow, I need to understand where I stand today: which processes are documented, which have defined quality criteria, and which still rely on tacit knowledge. Without that baseline, any automation is built on sand.
  2. Build a prioritized roadmap. With maturity measured, the next step is identifying the highest-friction, highest-potential points. The roadmap is not a plan of tools — it's a plan of business impact.
  3. Deliver early wins. Choose one or two concrete use cases, execute them with the principles applied, measure results, and share learnings. Early wins are the evidence that builds organizational confidence to keep moving forward.

This journey is just beginning. If you're going through something similar, I'd love to hear how you're experiencing it.