4 Comments
Rainbow Roxy

Brilliant. What a great overview of AI's journey and potential. I'm especially curious: how do you plan to delve into the accountability aspects of AI decision-making when discussing ethics?

Guilherme Favaron

Thanks for the comment! In my view, the great challenge of AI ethics is transforming accountability into something concrete, moving it from discourse into daily practice. I've learned over the years that this requires three integrated pillars.

First, real governance and traceability: having clarity on who decides, who approves, and who's responsible at each stage (using RACI, for example), with active risk committees, complete audit trails — data, models, prompts — and rigorous documentation at every release.
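To make that first pillar concrete, here's a rough sketch of what one audit-trail entry might look like, tying a release to its RACI roles, data fingerprint, and prompt version. All field names here are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical audit-trail entry for one model release, recording
# RACI roles plus the data/model/prompt artifacts that shipped.
@dataclass
class ReleaseAuditEntry:
    model_version: str
    dataset_hash: str          # fingerprint of the training data
    prompt_template_id: str    # which prompt/config went out
    responsible: str           # RACI: does the work
    accountable: str           # RACI: signs off, answerable
    consulted: list = field(default_factory=list)
    informed: list = field(default_factory=list)
    approved_by_risk_committee: bool = False
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

entry = ReleaseAuditEntry(
    model_version="credit-scorer-2.3.1",
    dataset_hash="sha256:ab12...",
    prompt_template_id="pt-014",
    responsible="ml-team",
    accountable="head-of-risk",
    consulted=["legal", "privacy"],
    informed=["support"],
    approved_by_risk_committee=True,
)
```

The point is that "who decides, who approves, who's responsible" becomes a queryable record per release, not tribal knowledge.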

Second, applied explainability and quality control: using techniques like SHAP or LIME where they truly add value, running robustness and bias tests before production, tracking fairness metrics by demographic group, and continuously monitoring drift.
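As one illustration of tracking fairness by demographic group, here's a minimal demographic-parity check in plain Python. The data and the choice of metric are just for the example; in practice you'd pick metrics suited to your domain:

```python
# Toy sketch: positive-prediction rate per group, and the gap
# between the best- and worst-treated groups (demographic parity
# difference). Records are (group, predicted_positive) pairs.
from collections import defaultdict

def positive_rate_by_group(records):
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in records:
        totals[group] += 1
        positives[group] += int(positive)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    rates = positive_rate_by_group(records)
    return max(rates.values()) - min(rates.values())

preds = [("A", True), ("A", True), ("A", False), ("A", True),
         ("B", True), ("B", False), ("B", False), ("B", False)]
print(positive_rate_by_group(preds))  # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(preds))  # 0.5
```

A gap like 0.5 before production is exactly the kind of signal the bias tests should surface and block on.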

Third, giving users a voice and maintaining human control: creating effective contestation channels, ensuring qualified human review when needed, having rapid rollback capability, and managing incidents with full transparency. And measuring all of this with clear KPIs — appeal rate, resolution time, incident rate, variation in fairness metrics.
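Those KPIs are straightforward to compute once contestation and incident logs exist. A toy sketch with made-up numbers:

```python
# Illustrative KPI computation from hypothetical logs:
# appeal rate, average resolution time, overturn rate, incident rate.
from statistics import mean

decisions = 2000           # automated decisions in the period
appeals = [                # (hours_to_resolution, overturned)
    (12.0, True), (36.0, False), (6.0, True), (48.0, False),
]
incidents = 3

appeal_rate = len(appeals) / decisions
avg_resolution_hours = mean(h for h, _ in appeals)
overturn_rate = sum(o for _, o in appeals) / len(appeals)
incident_rate = incidents / decisions

print(f"appeal rate: {appeal_rate:.2%}")                # 0.20%
print(f"avg resolution: {avg_resolution_hours:.1f} h")  # 25.5 h
print(f"overturn rate: {overturn_rate:.0%}")            # 50%
```

A high overturn rate, for instance, is a strong signal that the human-review loop is catching real model errors and the system needs attention upstream.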

Best regards,
Guilherme

Daniel Popescu / ⧉ Pluralisk

Excellent guide! How do we scale compute sustainably?

Guilherme Favaron

Thanks for the comment! In my view, scaling compute sustainably means treating sustainability as an architectural requirement, with explicit goals and trade-offs, not as an afterthought. Over time, I've come to see this as a multi-layered challenge.

It starts with algorithmic efficiency: choosing smaller, hardware-aware models (sparsity, quantization, distillation), optimizing batch and sequence length, and using mixed precision to reduce FLOPs without sacrificing quality.
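To give a feel for one of those levers, here's a toy sketch of symmetric int8 quantization: store weights in 8 bits, dequantize on use, and accept a small rounding error for a 4x storage reduction versus float32. This is illustrative only, not a production kernel:

```python
# Symmetric int8 quantization sketch: map floats into [-127, 127]
# with a single scale factor, then reconstruct approximately.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.42, -1.27, 0.05, 0.89]
q, scale = quantize_int8(w)
approx = dequantize(q, scale)
# Rounding error stays small relative to the weight range:
print(max(abs(a - b) for a, b in zip(w, approx)))
```

Real deployments layer this with distillation and mixed precision, but the energy logic is the same: fewer bits moved and multiplied means fewer joules per token.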

Then there's location and clean energy: training where renewable energy is most available, securing long-term energy contracts, and prioritizing data centers with low PUE and efficient cooling systems.

We also need conscious orchestration: autoscaling with defined limits, load shifting to lower-emission hours (carbon-aware scheduling), and "green SLAs" that allow flexible windows when possible.
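Carbon-aware scheduling can be surprisingly simple in principle: given a forecast of grid carbon intensity, slide the job to the cleanest window that fits. A sketch with made-up forecast numbers:

```python
# Pick the start hour that minimizes average grid carbon intensity
# (gCO2/kWh) over the job's duration. Forecast values are invented.
def best_start_hour(intensity_by_hour, job_hours):
    best, best_avg = None, float("inf")
    for start in range(len(intensity_by_hour) - job_hours + 1):
        window = intensity_by_hour[start:start + job_hours]
        avg = sum(window) / job_hours
        if avg < best_avg:
            best, best_avg = start, avg
    return best, best_avg

forecast = [420, 390, 310, 180, 150, 160, 240, 380]  # hours 0..7
start, avg = best_start_hour(forecast, job_hours=3)
print(start, round(avg))  # hour 3, ~163 gCO2/kWh on average
```

The "green SLA" part is what makes this possible: a flexible completion window is what lets the scheduler wait for the midday solar dip instead of running at the worst hour.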

Hardware choices matter too: latest-generation GPUs/TPUs with better performance per watt, reusing checkpoints and fine-tuning instead of training from scratch, and leveraging serverless inference for peaks.

Most importantly, we must measure and be accountable: tracking emissions per experiment and per request (gCO2e), publishing model cards with energy costs, and setting reduction targets per quality unit (e.g., CO2e per point gained in metrics).
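The per-request accounting itself is just arithmetic once you have the inputs: IT energy per inference, the data center's PUE overhead, and the grid's carbon intensity. All numbers below are illustrative:

```python
# gCO2e per request = (IT energy in kWh) * PUE * grid intensity.
# PUE > 1 accounts for cooling and other facility overhead.
def gco2e_per_request(energy_wh_it, pue, grid_gco2_per_kwh):
    facility_kwh = energy_wh_it / 1000 * pue
    return facility_kwh * grid_gco2_per_kwh

g = gco2e_per_request(energy_wh_it=0.3,    # ~0.3 Wh per inference
                      pue=1.2,             # an efficient data center
                      grid_gco2_per_kwh=250)
print(round(g, 3))  # 0.09 gCO2e per request
```

Small per-request numbers multiply fast at scale, which is why the targets should be expressed per quality unit rather than in absolute terms alone.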

Finally, responsible offsetting: reduce first, then offset through audited projects, and tie compute budgets to sustainability indicators.
