BSI and the Korean Agency for Technology and Standards (#KATS) will be co-hosting AI for All: Standardisation and Industrialisation on the 2nd of May. The free online event will explore the importance of #standards in providing interoperability and security for #AI technologies, and how they can drive the rollout of AI across a variety of sectors. Over the two-hour session, experts will offer valuable insights on the current landscape of international AI standardisation efforts, the role of standards in the responsible and safe use of AI, and the pathways to widespread adoption. Secure your spot now: https://bit.ly/3U83TnM #ArtificialIntelligence #Technology #Innovation Adam Leon Smith FBCS Nikita Bhangu Dr. Eric Seungman Kim Jong-Won Kwon Jennifer Durrant Nuala Polo
BSI Digital trust’s Post
📣 Super pleased to announce that, on 18 March at 15:00 UK, Avi Gesser and I will be presenting on “The Rise in AI Regulation: What to Expect & How to Prepare” at the #AIUK2024 Fringe, in association with The Alan Turing Institute. We’ll be covering: 💡 The AI concerns that are currently top-of-mind for regulators globally; 🌏 What the regulatory landscape looks like in the short and long term, including the unique challenges of developing a regulatory compliance strategy across borders, and in the face of potentially divergent regimes; and ✅ Emerging hallmarks of responsible AI governance and compliance, and practices to consider adopting now. We hope you can join us. Sign up here: https://lnkd.in/eGx8h-gT #AIUK
COO | Driving Growth in B2B Tech SMEs and Start-Ups | Expert in Strategy, Execution, People and Change, Operational Transformation, P&L Management
AI models are powerful tools, but their propensity to "hallucinate" (generate false or misleading information) poses significant challenges. Research is underway to devise methods that make these AI systems more reliable and accurate, which is essential for their adoption in critical sectors. Addressing AI hallucinations will safeguard the technology's credibility and utility. #AIResponsibility #TechInnovation #FlexCOO https://econ.st/3w92C7R
A warning about LLMs and AI from the German government: https://lnkd.in/gMDaGDij Yes, regulation and validation/verification (V&V) are critical. For example, nuclear tests served to V&V atomic bomb models. A newly released UN report said that AI benefits ONLY a small number of states, companies and individuals; in other words, a few humans are making big money by using AI to harm many people. AI is based on mathematical models, and models must be V&V'd before we can trust them. Using mathematical models to simulate a physical process began with the Manhattan Project in WWII for nuclear bomb design. Then, in the 1960s, came C4ISR, the forerunner of today's AI, whose original missions were breaching enemy security, dis-/mis-information, cognitive manipulation, deception, surveillance, detection for targeting, etc. In our reality, AI causes expensive electricity bills for a C-level friend of ours. 😔 😥 😿 AI can do many things, but NOT everything, at least NOT what we are doing with our intellectual property (IP), a copyrighted multilingual metadata set. Without metadata, NO data can be found or retrieved, even by AI. https://lnkd.in/g-aJFnXR
As standards and regulation begin to provide the much-needed guardrails for AI to be a force for good, how can we ensure we foster innovation that drives benefit for all? Today, BSI’s Craig Civil partners with Daniel Saunders from L Marks to explore the opportunities of AI innovation at EmTech London. Join the discussion at 1pm. https://lnkd.in/eQst7RUh #Innovation #AI #EmTech #Standards #Regulation
As AI continues to advance, the question remains: How do we balance human safety and #ArtificialIntelligence innovation? Join the dialogue in this issue of #Transform now: https://lnkd.in/gmp6FNxW #Huawei #ThisIsHuawei #BetterTogether
Unsurprisingly, the integration of Artificial Intelligence is reshaping regulatory compliance 🤖 It offers the potential for significant benefits in insights, cost-efficiency, and reduced burdens. However, it should be approached carefully, given the complexities and implications it brings to traditional regulatory frameworks. Read more on the need for a holistic and sustainable approach that integrates AI, regulatory technology (regtech), and human expertise 👉 https://hubs.la/Q020MlTD0 At RegASK, we’re at the forefront of AI-powered regulatory support, so contact us now: https://hubs.la/Q020Mr8R0 #AI #RegulatoryCompliance #RegTech
We believe in a collective approach centering people in each step of the AI system. This will not only prevent the potential harm these tools could inflict but also create the conditions under which this promising new technology can enable human flourishing. But how do we accomplish this? - Ensure that people impacted by AI have agency in shaping the technology - Put the burden of proof on developers, vendors, and deployers to demonstrate that their tools do not create harm—and give regulators and citizens the tools to hold them accountable - Address power and information asymmetries Check out our AI policy principles to learn more—and sign onto our Agenda for Responsible AI: https://lnkd.in/gYPNWFEg #AI #AIPolicy
Standards Researcher | BCS F-TAG | ForHumanity
2w · Steve English