Curious how an audit of your AI applications can improve your company’s performance while mitigating substantial risks? Our work with clients who are pioneering AI to lead their industries illustrates the depth of Eticas’s expertise.
Since 2012, the Eticas Foundation has worked with the world’s leading policy and government organizations, both shaping policy and auditing their complex systems to ensure accuracy, fairness, and outputs that benefit society at large.
Since 2016, the Laura Robot has analyzed more than 8.6 million visits in 40 clinical and hospital centers across several Brazilian states. Eticas examined version 1.0 of the Laura application, created in 2017.
The main objective of the Laura system is to provide early warning of clinical deterioration that could lead to death, with the aim of reducing mortality and hospital service costs. It is an Artificial Intelligence system that classifies a patient’s risk of clinical deterioration after analyzing the indicators from the patient’s last five vital-sign collections.
Laura’s algorithmic audit was focused on exploring possible risks of algorithmic bias or discrimination.
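As a purely illustrative sketch of what a risk classification over a patient’s last five vital-sign collections could look like (this is not Laura’s actual model, whose internals, thresholds, and features are proprietary and hypothetical here):

```python
# Purely illustrative: a threshold-based early-warning classifier over a
# patient's last five vital-sign collections. All thresholds, features,
# and risk bands are hypothetical, not Laura's actual (proprietary) model.
from statistics import mean

def deterioration_risk(collections):
    """collections: list of 5 dicts with 'heart_rate', 'resp_rate', 'temp_c'."""
    assert len(collections) == 5, "expects the last five collections"
    score = 0
    # Hypothetical thresholds loosely inspired by early-warning scores.
    if mean(c["heart_rate"] for c in collections) > 110:
        score += 2
    if mean(c["resp_rate"] for c in collections) > 24:
        score += 2
    if mean(c["temp_c"] for c in collections) > 38.5:
        score += 1
    # Trend: a sharply worsening heart rate across the window adds risk.
    if collections[-1]["heart_rate"] - collections[0]["heart_rate"] > 20:
        score += 1
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"
```

An audit of such a system would probe, among other things, whether the thresholds and resulting risk bands perform equally well across demographic groups.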
Advances in AI, especially facial recognition, hold great promise, but the risks to diverse users need careful consideration. This audit aims to ensure empathetic and conscientious innovation, using AI to benefit everyone.
The Impact of Facial Recognition on People with Disabilities
Algorithmic auditing is an instrument for dynamic appraisal and inspection of AI systems. This guide focuses on adversarial or third-party algorithmic audits, where independent auditors or communities thoroughly examine the functioning and impact of an algorithmic system, when access to the system is restricted.
The European Union implemented far-reaching legislation to regulate the digital landscape: The Digital Services Act (DSA) aims to overhaul the rules for online services in the EU, which were last updated over two decades ago. Here’s what you need to know about the DSA and its potential impact.
What is the impact of social media on the representation and voice of migrants and refugees in Europe? What are the challenges and opportunities to avoid their invisibilization and promote a fair representation?
The video platform perpetuates a dehumanizing image of migrants, and its recommender system rewards xenophobic narratives, feeding back into a context of rising far-right political discourse.
Along with Observatorio TAS and the Taxi Project, we conducted an adversarial audit to unveil the hidden impacts of ride-hailing algorithms in Spain, identifying potential harms to users, workers, and competitors in the platform economy.
An analysis of the EU’s investment in AI development reveals a significant mismatch between the EU’s ambition to lead on responsible AI and the allocation of its own funds to deliver on that objective.
An email instead of a letter, online shopping instead of driving to a mall, a video conference instead of an in-person meeting. Are these activities as green as we think, or do they hide an environmental footprint?
Spotting privacy and ethical implications: phones with facial recognition, administrations storing our bodily traits, voice recordings determining our access to jobs or benefits… Are biometrics a security miracle, or a threat to human rights?
This Netherlands-based software startup raised €1M in funding three years after its founding. It promises to give users visibility into, and a choice over, the personal information Google holds on them. It also offers the choice to sell that data to the company’s partners in exchange for rewards. But how safe is it?
Private companies offering direct-to-consumer (DTC) DNA services have increased their market significantly in recent years. It was estimated that by 2021 more than 100 million people would have provided their DNA to four leading commercial ancestry and health databases.
After months of work, we are thrilled to present the first annual report of our Observatory of Algorithms with Social Impact (OASI). We want to share its key findings, and invite you to take a look at it to get a deeper understanding of this sample of the algorithmic landscape.
VioGén is an algorithm that determines the level of risk faced by a victim of gender-based violence and establishes her protection measures in Spain. It is the largest risk assessment system in the world, with more than 3 million registered cases.
New York Times: A.I. Belongs to the Capitalists Now
“In a larger sense, what’s happening at OpenAI is a proxy for one of the biggest fights in the global economy today: how to control increasingly powerful A.I. tools, and whether large companies can be trusted to develop them responsibly.”
Bloomberg: Regulate AI? How US, EU and China Are Going About It
“Some US cities and states have already passed legislation limiting use of AI in areas such as police investigations and hiring, and the European Union has proposed a sweeping law that would put guardrails on the technology. While the US Congress works on legislation, President Joe Biden is directing government agencies to vet future AI products for potential national or economic security risks.”
Axios: VC firms working with D.C. to "self-regulate" AI startup investing
“A group of venture capital firms have pledged to ensure that the startups they invest in adhere to responsible AI, to correct for the ‘move fast and break things’ philosophy that drove the rise of social media technologies without considering second- or third-order effects.”