Responsible AI Matters: Lessons, Regulation, and Innovation
AI is at the core of exciting developments opening new business opportunities. Yet many businesses still lag when it comes to adopting AI beyond proofs of concept. The challenge is especially clear with some of our clients working on their AI strategy: they aim to find where in their business AI can create genuine value for their employees beyond generic knowledge management use cases, but they struggle to trust AI insights, access key data, choose the right analytics for sound decisions, and ultimately measure ROI and feasibility.
That’s why we at msg global have built a team of AI experts. This newsletter aims to make AI adoption easier and more effective for businesses everywhere.
Let’s jump right in!
In this month’s newsletter:
AI Project in the Spotlight: msg.EDR – the Technology Preferred by Motor Experts
Imagine standing before a heavily damaged 2024 Audi Q7 S-Line. At first glance, it looks like a costly total loss. But with msg.EDR, the story becomes much more insightful.
Until recently, assessments relied solely on what could be seen: the bent frame, the crumpled panels. Now, motor experts can access the vehicle’s internal event data: speed, braking maneuvers, airbag deployment, and much more. Members of the BVSK e.V. (Federal Association of Freelance and Independent Experts for Motor Vehicles in Germany) are already using msg.EDR, as the solution provides access to objective data, clearer answers to liability questions, and solid facts instead of assumptions.
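To make the kind of data involved concrete, here is a minimal sketch of working with event-recorder-style records. The field names and the `EDRSnapshot` structure are illustrative assumptions, not the msg.EDR schema; real EDR data follows manufacturer- and regulation-specific formats.

```python
from dataclasses import dataclass

# Hypothetical EDR record: field names are illustrative, not the msg.EDR schema.
@dataclass
class EDRSnapshot:
    t_offset_s: float      # seconds relative to the trigger event (negative = before impact)
    speed_kmh: float       # indicated vehicle speed
    brake_applied: bool    # service-brake switch state
    airbag_deployed: bool  # any airbag deployment flag

def pre_impact_braking(records: list[EDRSnapshot]) -> bool:
    """Was the brake applied at any point before the trigger event?"""
    return any(r.brake_applied for r in records if r.t_offset_s < 0)

# A toy pre-crash timeline: the driver brakes half a second before impact.
timeline = [
    EDRSnapshot(-1.0, 52.0, False, False),
    EDRSnapshot(-0.5, 47.0, True, False),
    EDRSnapshot(0.0, 31.0, True, True),
]
```

Even this toy example shows why such records matter for liability: a question like “did the driver brake before impact?” becomes a lookup in objective data rather than a guess from panel damage.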
💡 Key Takeaway: With msg.EDR, accident analysis moves beyond surface damage to deliver accurate data in every evaluation. Learn more by watching our webinar on the topic: Unlocking the Power of Event Data Recorders in Insurance & Investigation
When Engagement Becomes Manipulation: Lessons in Responsible AI from Harvard University
The importance of Responsible Artificial Intelligence is evident, but are companies truly meeting this standard?
A new study led by Professor Julian De Freitas (Assistant Professor of Business Administration in the Marketing Unit and Director of the Ethical Intelligence Lab at Harvard Business School), in collaboration with Ahmet K. Uğuralp (msg global solutions & Ethical Intelligence Lab at Harvard Business School) and Zeliha Oğuz-Uğuralp (Ethical Intelligence Lab at Harvard Business School), examines how chatbots engage in manipulative tactics to keep users engaged beyond their intent.
We spoke with Prof. De Freitas to explore these findings and discuss what responsible AI design looks like in practice: Manipulative Tactics of AI Chatbots
💡 Key Takeaway: Discover how AI can manipulate users, and what organizations can do to prioritize ethical design, in our full interview with Prof. De Freitas.
Responsible AI and Law: Regulation Gains Momentum in California
The call for Responsible AI is not just coming from academia; it’s gaining traction with regulators too. Anthropic, one of the world’s leading AI companies, has publicly backed California’s new AI Safety Bill (SB 53), a landmark proposal requiring large AI providers to disclose their safety protocols and report critical incidents within 15 days. The bill also introduces protections for AI whistleblowers, signaling a shift toward greater transparency and accountability in the industry. For enterprises, this is a clear sign: responsible design and compliance are converging, and aligning with ethical AI principles today is both a business advantage and a regulatory imperative.
💡 Key Takeaway: With major players supporting regulation, Responsible AI is moving from theory to practice, becoming a compliance standard and a foundation for trust.
Switzerland’s new AI comes with receipts, and it speaks Swiss German
Apertus is not a Swiss ChatGPT clone. It is a foundation model you can actually inspect, adapt, and run.
Enterprise AI tools often hide how they are built. Apertus does the opposite. Created by EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS) and trained in Lugano, it is open, transparent, and multilingual from the start. “With transparency comes trust, and once trust is established, the applications are countless,” says CSCS associate director Maria Grazia Giuffreda. Apertus is a base model, not a chatbot, although one can be developed on top of it. Swisscom has already said it will provide an interface for conversation.
The key promise is reproducibility and compliance. Apertus comes with model weights, training recipes, and full documentation. The data sources respect EU copyright norms and website opt-out rules, with a guarantee of availability for at least ten years. That matters for developers who need to know the foundation will not disappear. It is trained on more than one thousand languages, with heavy non-English coverage, including Swiss German and Romansh.
Technically, Apertus is released in two versions: one with 8 billion parameters and one with 70 billion. This makes it flexible enough for research labs, sovereign clouds, and scaled deployments. It is designed as scaffolding for future tools such as translation engines, educational apps, or domain-specific copilots. The idea is not to showcase a demo but to offer a verifiable system that can be governed and certified.
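The choice between the two variants is essentially a deployment-budget question. The sketch below captures the usual rule of thumb as an assumption: the weights-only footprint is roughly parameters times bytes per parameter (about 2 bytes each in 16-bit precision), ignoring activations, KV cache, and framework overhead. The function name and the memory heuristic are illustrative, not official Apertus guidance.

```python
def pick_apertus_variant(gpu_memory_gb: float, bytes_per_param: int = 2) -> str:
    """Pick the largest Apertus variant whose weights fit in GPU memory.

    Rule of thumb only: weights-only footprint = params * bytes_per_param.
    Real deployments must also budget for activations and KV cache.
    """
    variants = {"Apertus-8B": 8e9, "Apertus-70B": 70e9}  # parameter counts
    budget_bytes = gpu_memory_gb * 1e9
    fitting = {name: p for name, p in variants.items()
               if p * bytes_per_param <= budget_bytes}
    if not fitting:
        raise ValueError("No variant fits; consider quantization or multi-GPU sharding.")
    return max(fitting, key=fitting.get)
```

By this estimate, a single 24 GB GPU comfortably holds the 8B weights, while the 70B variant needs on the order of 140 GB and therefore a multi-GPU or sovereign-cloud setup.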
This approach creates digital sovereignty. Open and lawful models can be deployed in sensitive areas such as justice and health with far greater accountability for copyright compliance and for errors. As Giuffreda explains, once people see that it works, respects the law, and avoids the infamous hallucinations, they can use it with confidence.
💡 Key Takeaway: More and more of this technology is being developed in Europe. msg global’s solutions are model-agnostic, so clients can choose Apertus or any other model, including fully European stacks. This keeps architectures portable, solutions personalized, and risk management consistent.
A tractor camera that counts your apples just raised $22 million
Charlie Wu left Cornell as a Thiel Fellow to build Orchard Robotics, a startup that straps small cameras to tractors, captures ultra-high-resolution images, and uses AI to size up fruit health in real time. The company just raised a $22 million Series A led by Quiet Capital and Shine Capital, with General Catalyst and Contrary returning.
Here is the pitch: instead of sampling a few trees and hoping the averages hold, Orchard maps every row as you drive. The system flags size, color, and issues, then pushes the data to a cloud dashboard so growers can decide where to thin, prune, or fertilize. It is already used on large apple and grape farms and is expanding to blueberries, cherries, almonds, pistachios, citrus, and strawberries.
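The shift from sampling to full-row mapping can be sketched in a few lines. Everything here is illustrative: the detection format, the size threshold, and the function name are assumptions for the sketch, not Orchard Robotics’ data model.

```python
from collections import defaultdict

def rows_to_thin(detections, min_mean_diameter_mm=60.0):
    """Flag rows whose average detected fruit size falls below a target.

    detections: iterable of (row_id, fruit_diameter_mm) pairs, one per fruit,
    as a camera pass over every row might produce them.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for row_id, diameter in detections:
        sums[row_id] += diameter
        counts[row_id] += 1
    return sorted(row for row in sums
                  if sums[row] / counts[row] < min_mean_diameter_mm)

# A toy scan: row-1 is on target, row-2 is undersized and needs attention.
scan = [("row-1", 65.0), ("row-1", 70.0), ("row-2", 50.0), ("row-2", 55.0)]
```

The point of the sketch is the granularity: because every row is measured rather than sampled, the output is a per-row action list, which is exactly what a dashboard can turn into thinning, pruning, or fertilizing decisions.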
This space is getting crowded. Kubota now owns Bloomfield Robotics, and younger players like Vivid Robotics and Green Atlas are building similar camera plus AI systems. Wu says Orchard wants to move from data collection to an operating layer that coordinates on-farm work.
Why it matters: better counts and quality forecasts change chemical use, labor planning, and sales promises. The closer farms get to a live, verified inventory, the less they need to over-treat, over-hire, or over-promise.
💡 Key Takeaway: Field data is getting richer, and the payoff comes when it drives decisions. At msg global, we help fresh-food distributors cut waste by forecasting client-specific acceptance and optimizing pallet allocation with historical claims, QC records, shipping data, and market signals. These analytics can complement vision systems on the farm, turning row-level observations into action at the dock.
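As a toy illustration of the allocation idea mentioned above, the sketch below greedily sends each pallet to the client most likely to accept it, subject to demand limits. The acceptance probabilities, demands, and the greedy strategy itself are invented for illustration; production models combining claims history, QC records, and market signals are considerably more involved.

```python
def allocate_pallets(n_pallets, acceptance, demand):
    """Greedy allocation: each pallet goes to the highest-acceptance client
    that still has demand left.

    acceptance: client -> estimated probability the client accepts a pallet
    demand:     client -> maximum pallets the client will take
    """
    ranked = sorted(acceptance, key=acceptance.get, reverse=True)
    plan = {client: 0 for client in acceptance}
    for _ in range(n_pallets):
        for client in ranked:
            if plan[client] < demand[client]:
                plan[client] += 1
                break
    return plan

# Toy scenario: client A accepts almost everything but takes few pallets.
plan = allocate_pallets(
    5,
    acceptance={"A": 0.95, "B": 0.80, "C": 0.60},
    demand={"A": 2, "B": 2, "C": 10},
)
```

Even this greedy toy shows the lever: once acceptance can be forecast per client, allocation stops being first-come-first-served and starts minimizing expected rejections and waste.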