AI Leaders Call for Clear Rules to Tackle Trust Crisis

Highlights

  • AI experts urge combining regulation and industry action to protect public trust.

  • Operational ethics frameworks are needed beyond traditional principles and statements.

  • Embedding values into AI design can help address emerging societal risks.

Samantha Reed
Last updated: 8 August 2025, 7:19 pm

As organizations accelerate the deployment of artificial intelligence in sectors like healthcare, finance, and justice, concerns continue to emerge over the pitfalls of neglecting ethical safeguards. The increasing reliance on automated decisions has sparked discussions about where accountability lies and what measures must be in place to preserve public trust. Public and private sectors now face mounting pressure to craft solutions that go beyond written ethics statements, turning abstract commitments into enforceable, transparent practices. The conversation is gradually shifting from “if” to “how” ethical frameworks can be embedded where AI impacts human lives most directly. Industry watchers note that the tension between rapid innovation and ethical responsibility remains unresolved, with ongoing debate about the balance of power between regulators and technology vendors.

Contents

  • Why Are Ethical Structures Seen as Essential for AI?
  • How Does the Foundation Propose Managing AI Accountability?
  • What Role Should Government and Industry Play in Regulation?

Industry responses to AI regulation have historically varied, often emphasizing the self-regulatory capacity of tech companies or calling for government intervention in cases of misuse. Earlier discussions tended to separate regulatory action from innovation, sometimes warning of slowed progress if external rules were too stringent. More recent perspectives, however, suggest a multi-stakeholder approach, highlighting the necessity of collaboration and mutual accountability. This marks a noticeable shift from earlier narratives, which viewed ethics primarily as a voluntary industry concern, toward recognizing legal structures as a necessary foundation for safe AI deployment.

Why Are Ethical Structures Seen as Essential for AI?

Suvianna Grecu, founder of the AI for Change Foundation, emphasizes that pressing ethical concerns with artificial intelligence arise not from the technology itself, but from insufficient frameworks guiding its implementation. She argues that unchecked deployment leads to large-scale, automated errors with real-world consequences. With AI systems now influencing critical outcomes in employment, credit assessment, and the criminal justice system, many remain untested for embedded biases or their broader societal impact. According to Grecu, “For too many, AI ethics is still a policy on paper — but real accountability only happens when someone owns the results.”

How Does the Foundation Propose Managing AI Accountability?

Grecu’s organization advocates for shifting from general principles to specific, operational practices by integrating ethics into daily workflows. Practical tools, such as checklists and pre-deployment risk evaluations, are recommended to track and mitigate risks before AI systems are widely adopted. Additionally, she advocates for cross-disciplinary review boards merging legal, technical, and policy perspectives to ensure comprehensive oversight. Clear process ownership at each development phase and transparent documentation of decisions are identified as crucial steps toward reliable governance.
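To make the idea of operational ethics concrete: the sketch below shows, in Python, how a pre-deployment risk checklist with named process owners might look in practice. The Foundation's actual tooling is not public, so every class, check, and name here is a hypothetical illustration, not a description of its methods.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the AI for Change Foundation's real tooling is not
# public, so every name and check below is a hypothetical example of how a
# pre-deployment risk checklist could be made operational in code.

@dataclass
class RiskChecklist:
    system_name: str
    results: dict = field(default_factory=dict)

    def record(self, check: str, passed: bool, owner: str) -> None:
        # Each check is tied to a named owner, mirroring the article's call
        # for clear process ownership and transparent documentation.
        self.results[check] = {"passed": passed, "owner": owner}

    def ready_to_deploy(self) -> bool:
        # Deployment stays blocked until at least one check has been run
        # and every recorded check has passed.
        return bool(self.results) and all(
            r["passed"] for r in self.results.values()
        )

checklist = RiskChecklist("credit-scoring-model")
checklist.record("bias audit on protected attributes", True, "ml-team")
checklist.record("legal review of affected rights", False, "policy-board")
print(checklist.ready_to_deploy())  # False: the legal review has not passed
```

The design choice worth noting is that the gate defaults to “not ready”: an empty checklist blocks deployment, which reflects the article's point that accountability only exists when someone owns the results before release, not after.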

What Role Should Government and Industry Play in Regulation?

Grecu makes it clear that ensuring AI’s responsible use cannot be relegated to one sector alone. She advises that governments should establish clear legal minima and standards, especially in contexts affecting fundamental rights, while companies take up the responsibility for technical advancements and improvement of auditing tools. “It’s not either-or, it has to be both,” she says, proposing industry-regulator collaboration to avoid both stagnation and unchecked risk. Grecu adds, “Collaboration is the only sustainable route forward.”

Broader discussions now turn to the intrinsic values embedded in these technologies. Grecu highlights emerging issues, such as AI systems’ potential to manipulate emotions, which threaten personal autonomy and social trust if left unaddressed. She points out that artificial intelligence systems reflect both the data and objectives they are given:

“AI won’t be driven by values, unless we intentionally build them in.”

This reflects a growing awareness that without deliberate design choices, AI will optimize for efficiency, not societal values like justice or inclusion.

European policymakers and stakeholders, according to Grecu, have a unique opportunity to prioritize human rights, transparency, and inclusivity throughout digital policy and product development. She argues for embedding these values at every stage to ensure that AI serves humans, not just markets. As initiatives like the AI & Big Data Expo Europe increase visibility and promote dialogue, coalitions may help solidify a value-driven approach to AI governance.

Enduring questions remain about how best to balance rapid AI advancement with meaningful oversight. Relying solely on voluntary industry standards risks neglecting individual rights and undercutting public confidence, while heavy-handed rules might impede innovative growth. Establishing cross-sectoral mechanisms for accountability, including purposeful design and stakeholder collaboration, appears to be gaining favor. Stakeholders may benefit from treating practical ethics as routine a discipline as quality assurance, ensuring technology that is robust and also respects societal values. For organizations looking to deploy artificial intelligence, integrating multidisciplinary assessments and maintaining ongoing public engagement may help build the trust that makes safe, widespread AI adoption possible.

By Samantha Reed
Samantha Reed is a 40-year-old, New York-based technology and popular science editor with a degree in journalism. After beginning her career at various media outlets, her passion and area of expertise led her to a significant position at Newslinker. Specializing in tracking the latest developments in the world of technology and science, Samantha excels at presenting complex subjects in a clear and understandable manner to her readers. Through her work at Newslinker, she enlightens a knowledge-thirsty audience, highlighting the role of technology and science in our lives.
© 2025 NEWSLINKER. Powered by LK SOFTWARE