Cogito Tech and Fairly AI are joining forces to advance Responsible AI

By Matthew McMullen, Senior Vice President and Head of Corporate Development at Cogito Tech, and Hassan Patel, Director of Global AI Policy Compliance Engineering at Fairly AI and a practicing attorney specializing in technology law.
By: Cogito Tech for Generative AI Solution

Several actions within the AI Executive Order affect the AI data and model development space:

- Developers of foundation models must disclose safety test outcomes and essential data to the US government, emphasizing transparency in a model's training data and biases.
- The Department of Commerce is tasked with developing guidelines for content authentication and watermarking, ensuring that AI-generated content is clearly labeled and curated.
- The order calls for expanding privacy protections, emphasizing the need for transparent and ethical data collection and usage, a vital aspect of AI training.

Deep Rastogi, Product Manager at Business Wire, emphasizes: "For product managers, transparency in AI decision-making is not only a moral duty but also a strategic advantage. By being transparent about how and why our AI systems make decisions, we can build trust with our customers, partners, and regulators, and demonstrate our commitment to ethical and responsible AI. Transparency not only enables us to learn from feedback; it also enables us to anticipate whether actual results will align with the desired outcomes of our tools. Transparency is not a trade-off but a win-win for everyone."

At Cogito Tech, we recognize the significance of quality, unbiased, and ethical data for training responsible AI. Our DataSum certification offers regulators and developers alike insight into the intricate mesh of the model training phase. The DataSum label allows stakeholders to see embedded best practices, adhere to a fair-use policy, and guarantee traceability of the data throughout the AI model's development.

The order's stipulations also raise open questions. For instance, the requirement that companies notify the federal government when training potentially threatening foundation models prompts several: Who establishes what constitutes a "serious risk"? How is it defined? Will there be public access to safety test outcomes?
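To make the idea of dataset traceability concrete, here is a minimal sketch of a per-item provenance record. This is an illustration only, not the DataSum format: the function name, field names, and dataset identifiers below are hypothetical. The core idea is that hashing each training item lets an auditor later verify that the item reviewed is the item actually used in training, while the source and license fields support fair-use and data-collection transparency.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, source: str, license_terms: str) -> dict:
    """Build a minimal provenance record for one training item.

    The SHA-256 content hash gives auditors a stable fingerprint;
    source and license document where the item came from and under
    what terms it may be used.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "source": source,
        "license": license_terms,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical example item and dataset identifier, for illustration.
record = make_provenance_record(
    b"example training text", "partner-dataset-01", "CC-BY-4.0"
)
print(json.dumps(record, indent=2))
```

A real certification pipeline would record many more attributes (consent, annotation history, bias review outcomes), but even this shape shows how traceability can be made verifiable rather than asserted.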
Furthermore, as AI technologies advance, what are the best practices for ensuring ethical integrity in datasets?

https://www.fairly.ai/