The First Grand Challenge:
More information

The Grand Challenge Rulebook, containing the rules governing the competition, on SSRN

The Fact Sheets we used for potential teams, partners, and A.I. Providers, and the Grand Challenge Presentation (pdf via Dropbox)

The version of the EU's A.I. Act we used for the Grand Challenge (pdf via Dropbox)  

The many questions our Teams, A.I. Providers, media, etc. frequently asked (FAQs), May 2023

Below are the FAQs as of May 2023. Some answers changed once the definitive version of the Rulebook became available; in such cases, we have added a remark in parentheses at the end of the answer.


Potential teams and team members

The video of the Proposers’ Day, including the speakers’ presentations, is available on our YouTube channel.

The Grand Challenge prize of 100’000 CHF (approx. 100’000 €/$) will be given in cash via bank transfer, i.e., not in the form of a prize in kind such as a gold bar, car, etc. In keeping with the motto of the University of St. Gallen, “From Insight to Impact”, and the Grand Challenge being a research undertaking, the winning team is expected to use the prize money for further research (“insight”) or for putting its research into practice (“impact”).

We are making the Grand Challenge Rulebook available in two installments, namely Rulebook 1.0 and 2.0. Rulebook 1.0 contains sufficient information for potential teams and team members to decide whether to apply to become a Grand Challenge Team. It is available on SSRN as a basis for team applications until 22 March 2023. Rulebook 2.0 will be available well before the Boot Camp on 11 or 12 July 2023, be based on and expound on Rulebook 1.0, and contain the detailed rules that legally apply to and govern the Grand Challenge, in particular the Final.

Yes, absolutely. Implementation of and compliance with the EU AI Act raise questions of a general nature that need to be answered on all continents, including in the U.S., irrespective of the applicable legal framework. The experience Teams gain in the Grand Challenge will likely scale beyond the jurisdiction of the EU to other legal orders.

You might have joined us at the Proposers’ Day, our online informational event on 2 March 2023. The recording is available on our YouTube channel. You can still register for matchmaking. Note, however, that we will not allocate individuals to potential teams. We expect the Teams to form themselves. There is no point in sending us an application on 22 March 2023 stating that you want to be part of a Team; by that point, you need to be integrated as a member into the application of a prospective team in order to be considered for participation. We might later offer suggestions to Teams that applied, depending on the circumstances, but we will only do this after the application window has closed, i.e., after 22 March 2023.

We will disclose the composition of the Jury in due course. Some members of the Jury will be experienced in judging, others in research, but they will all be lawyers, i.e., persons trained and educated in the law (though not necessarily admitted to the bar). None of them will be experts in technology or computer science. The Jury will consist of 3-5 persons, with members having roots in Europe or North America. They will cover various disciplines of the law, including public and private (commercial) law. While the Grand Challenge AI assessments will be addressed to the AI Providers, there is an inherent court-of-law aspect in that the Jury also considers whether the AI assessments would hold up in court. We kindly ask potential teams and team members to respect the code of ethics contained in the Rulebook, which precludes them from approaching members of the Jury outside the formal avenues of the Grand Challenge. (The members of the Jury are now visible on the webpage.)

The criteria we apply to select Teams are expertise, credibility, and diversity in terms of gender, age, and general background. As the Grand Challenge is a research-driven undertaking, the team leader must have an affiliation with academia, but we intend to handle this somewhat flexibly. Given the complexities of the technology and the EU AI Act, teams should include professionals and practitioners, but we encourage them to also include a student to foster the educational dimension. Teams consist of a maximum of 6 persons. Some knowledge of computer science is probably required to conduct AI assessments successfully.

We expect Teams to be at the Final in person (18-19 July 2023) and hopefully at the Boot Camp. Depending on circumstances, we may offer a limited number of informational sessions with talks on substantive aspects relevant to the Grand Challenge, e.g., a progress report on the EU A.I. Act or on the work of the standardization bodies. These sessions will take place online in the period between the selection and the Final. They will in any case be short. In addition, we may occasionally have online contact with the Team leaders for updates. We are very much aware that you have other things to do than being a member of a Grand Challenge Team and do not intend to turn the Grand Challenge into a full-time occupation for you. Essentially, we want to keep it within proportions that can be handled easily by someone with a full-time job. How much preparation individual Teams require to be up to the task of assessing AI applications is hard for us to tell, as it depends on each Team’s level of expertise. Depending on the AI application (and the AI Provider), we may provide some information (a briefing or a short dossier) on the AI application ahead of the Final. We will do this if we expect that an AI application is too complicated to understand in the limited time available at the Final. Again, we intend to keep this within proportions, and Teams should receive notice about the timing well before the Final.

Team leaders’ affiliation with academia is necessary because we are part of a public university that can only do research (and education). Requiring Teams to be led by someone affiliated with academia anchors the Grand Challenge in research. Being in academia also subjects one to the rules of academia, including academic freedom, academic honesty, expense regulations, etc. From this it follows that a Team leader’s affiliation with academia must be current. Ideally, a Team leader is a senior researcher with experience in the field and a position (job) in academia. Students do not qualify as Team leaders, as students are not typically part of the corps employed at a university to do research. A post-doc may qualify, depending on the circumstances, as may a PhD student, depending on the modalities of the PhD. We understand the term academia broadly, i.e., as including research institutes (e.g., Max Planck institutes, Fraunhofer institutes, etc.), institutes of applied science (e.g., universities of applied sciences/Fachhochschulen in the German-speaking countries), and private universities. There are various academic functions and roles that could, depending on the circumstances, suffice for qualification as a Team leader, e.g., being an honorary professor, visiting professor, external teacher, etc. In such a case, a team should provide some evidence with its application that the Team leader is subject to the rules of academia and that the academic institution concerned is aware of and agrees to the Team leader appearing for the institution in the Grand Challenge, e.g., by means of a letter of support. A team can add such a document to its application but, for reasons of fairness, should keep such supporting documents to the absolute minimum necessary (1-2 pages). In case of doubt about whether a person qualifies as a Team leader, please reach out via e-mail to thegrandchallenge@unisg.ch.

The idea of section 1.1 is that Teams should not try to find out who provides the AI technology to be assessed at the Grand Challenge. If you come across an AI Provider of the Grand Challenge, please do not press for or seek further information regarding the role the entity you are facing plays in the Grand Challenge or the technology it provides to it. However, the Grand Challenge is by no means meant to interfere with your normal work. If you have business with technology companies, being a Team in the Grand Challenge does not prevent you from continuing your work with these companies. If your business happens to be advising technology companies on the law and ethics of AI, the Grand Challenge does not prevent you from continuing this business. On the contrary, we encourage you to participate as a Team. You may also want to contact technology companies with a view to their joining your Team, and we encourage this, too. We just ask you to be careful when it comes to discussing the Grand Challenge with your clients, etc., especially once you are admitted to the Grand Challenge as a Team. You should not seek to acquire an unfair advantage over other Teams participating in the Grand Challenge.

We do not have firm expectations in this regard. You can briefly explain your approach, etc., in the cover letter if it serves to demonstrate your expertise. You can expect us to be familiar with the main public documents, in particular, the EU A.I. Act, so it is not necessary to go into details in this regard.

It will depend on the AI application to be assessed and the AI Provider putting it up for assessment. Hence access may vary from AI application to AI application. As organizers, we need to give our AI Providers the freedom to set their own limits in this regard.

Yes, absolutely. There are no constraints on what Teams can use, as long as they use it within their Team, i.e., as long as the Team does not collaborate with persons who are not part of it. However, Teams are invited to factor in that they will probably have less time to interact with AI Providers than they would have outside of a competition. There will probably not be more than 1 hour per AI Provider per Team in the Final. This may limit the extent to which Teams can use questionnaires, etc., in their interaction with AI Providers. We expect the interaction between AI Providers and Teams to take place mostly orally during the time allocated (+/- 1h).

The details of the output have not yet been settled, but the idea is that Teams produce an aggregated assessment report, i.e., a report covering all the AI applications assessed in the Final, and hand this report in to the Jury. This report will be rather short. Teams would do well to keep in mind that the Jury needs to study all the Team reports within a limited amount of time that will probably not exceed a few hours. As to the specific AI assessments, the output is open. It is worth noting that some AI Providers participate in the Grand Challenge because they want to know whether and to what extent their AI application complies with the EU AI Act. They are also open to advice on how to improve compliance. The Jury will also look at such advice. If, in the Jury's view, such advice was necessary but is missing from an assessment report, the assessment will probably be considered weaker. The AI assessments Teams make during the Grand Challenge do not have to be done solely under the EU AI Act. Other frameworks (standards, etc.) may also be relevant for the assessment in the light of the EU AI Act, as long as they are compatible with it. (See section 4.2.8 of the Rulebook, "Assessment Report".)

The Grand Challenge does not assert any intellectual property rights with regard to Teams’ contributions, in particular the tools Teams use to assess AI technology. For the Grand Challenge to function properly, the specific AI assessments Teams make must be made available to us as organizers, to our Strategic Partner (the Swiss Drone and Robotics Centre), to the AI Providers, and to the Jury. In keeping with the Grand Challenge being a research undertaking, and in the interest of the transparency of the award, Teams’ AI assessments will be shared among the Teams that fully conclude the Grand Challenge Final. AI assessments will not be public, though, and Teams are not allowed to make other Teams’ assessments public. Teams are free to release their own assessments to the public unless the Rulebook precludes this for a specific AI application. As organizers, we will publish a report on our experience with the Grand Challenge in aggregated form, i.e., without the details of AI assessments. Finally, we do assert our rights with regard to the idea and concept of the Grand Challenge.

There is currently no prize for the runner-up Team. (According to section 2.1 of the Rulebook, the runner-up would win 5’000 CHF.)

We cannot answer this for sure yet, as we are currently putting together the AI applications to be assessed in the Final. We aim at about half a dozen AI applications for the Final. The amount of time available for each AI assessment is not yet clear. It will depend on the number of AI applications to be assessed and on their complexity. Most likely, not every AI assessment will take the same amount of time. (There were seven AI applications, and the time available was 30 minutes.)

We aim to have what we call a carry-over AI application, i.e., an AI application that Teams face for the first time during the Boot Camp on 11 or 12 July 2023 and then again at the Grand Challenge Final on 18-19 July 2023. This gives Teams several days to work on the assessment of the carry-over AI application. If there is a carry-over AI application, Teams, or at least parts of Teams, will obviously have an interest in being at the Boot Camp. That aside, we do not strictly expect Teams to be present in full at the Boot Camp. Persons who are not strictly members of a Team, e.g., someone showing up instead of a Team member, will not be admitted to the Boot Camp (or the Grand Challenge Final). This partly answers why we are not making the Boot Camp mandatory. Another aspect, however, is that the Boot Camp takes place in a security context. The disaster relief scenarios offered at the ARCHE Days (see next answer) are organized by the disaster relief unit of the Swiss armed forces so that civilian research groups (mostly from Swiss universities) can test their technology. Our Boot Camp is integrated into this exercise. As we are aware that not every team member may be comfortable with a security context, we refrain from making the Boot Camp mandatory and from expecting the presence of every single team member. (For the AI Providers, see the webpage.)

The Boot Camp is organized by our Strategic Partner, the Swiss Drone and Robotics Centre. It will be part of the ARCHE Integration Week 2023 (https://www.ar.admin.ch/en/home.detail.news.html/ar-internet/news-2021/news-w-t/arche-2021.html), which focuses on disaster relief systems (ARCHE: Advanced Robotic Capabilities for Hazardous Environments). The ARCHE Integration Week 2023 will take place near Geneva. The Boot Camp is an occasion for Teams to practice some of their assessment skills before the Grand Challenge Final.

We are doing our best to cover Teams’ expenses, or at least some of them. As things currently stand, it is likely that basic accommodation and board at the Boot Camp will be covered, but Teams should not expect us to cover a significant part of their other expenses.

While we make every possible effort to also raise funds to cover some of Teams’ expenses, we cannot guarantee anything at this point (see also the previous answer). We kindly ask Teams to try to raise their own funds to cover their travel and accommodation expenses. Hopefully, the fact of having been selected to participate in the Grand Challenge will enable Teams to make a credible pitch to potential funding institutions. However, we are ready to consider special cases, especially Teams and/or Team members from less developed regions. Please feel free to indicate this in the cover letter.

Basically, no. The code of ethics contained in the Rulebook will preclude Teams and their members from collaborating with other Teams, because this would undermine the competition. To form teams and apply for participation, interaction is obviously possible and necessary. Once selected, however, Teams should refrain from collaborating with other Teams.



Potential AI Providers

We will answer your questions individually and personally if you are a potential AI Provider. Please bear in mind that the success of the Grand Challenge hinges on Teams not knowing which AI technology they will assess in the competition. We therefore kindly ask you to keep everything about potential AI provision to the Grand Challenge confidential.

If you are asking yourself why you should participate as an AI Provider in the Grand Challenge, we have an information sheet that discusses the advantages for you. We are happy to share it with you. Please contact us at thegrandchallenge@unisg.ch.



General questions

Essentially because we are situated in Switzerland. Though Switzerland is not a Member State of the EU, EU legislation such as the EU AI Act usually affects Switzerland almost as much as it affects the EU Member States. The Grand Challenge is a practical, research-driven undertaking that does not focus on political relationships, including those within the EU or between the EU and Switzerland or other third countries.

As the Grand Challenge focuses on research, we do not have “sponsors” like commercially oriented undertakings such as sports, media, or TV events do. The funding for the Grand Challenge stems from research institutions that become “Partners” of the Grand Challenge (see our homepage). If you are interested in becoming a Partner of the Grand Challenge, please contact us at thegrandchallenge@unisg.ch. We encourage commercially oriented undertakings, such as technology companies, law firms, standards organizations, consulting firms, banks, insurance companies, etc., to sponsor a team or field a team themselves.

Which law applies to the Grand Challenge?

Any recourse to the courts of law is excluded. The Grand Challenge relies on the good faith of all participants rather than on challenges brought before courts. As organizers, we will at all times do our best to hear and heed participants’ legitimate concerns, in particular those of Teams. Rulebook 2.0, as it expands on Rulebook 1.0, is the authoritative legal framework for the Grand Challenge. Any question the Rulebook does not address is subject to Swiss law. The regular Swiss courts shall have jurisdiction to the extent that recourse to the courts of law cannot lawfully be excluded.


The call for participation we launched on 18 January 2023

The University of St. Gallen in Switzerland is organizing a new public competition: “The First University of St. Gallen Grand Challenge: The EU A.I. Act 2023”.

We offer a prize of 100’000 CHF (approx. 100’000 USD/EUR) to the team that best assesses the compliance of A.I. technology with the upcoming A.I. Act of the European Union.

We invite potential teams and team members worldwide to participate in the Grand Challenge.

The main event of the Grand Challenge will be the Final on 18-19 July 2023 at SQUARE, University of St. Gallen, Switzerland.

The Final will be preceded by a Boot Camp on 11 or 12 July 2023.

On Thursday, 2 March 2023, 3 pm CET, we will host an online informational event, the “Proposers’ Day”, to provide details and guidelines for participants.

The Proposers’ Day also serves to foster teams and bring them together (“matchmaking”).

To register for the Proposers’ Day, send an e-mail to thegrandchallenge@unisg.ch stating your full name, occupation, and affiliation; you may add 1-2 sentences on your interest in the Grand Challenge.

More information on the Proposers’ Day, including the schedule, speakers, opportunities to talk with the organizers, etc., will follow soon on our webpage (www.thegrandchallenge.eu) or by e-mail for those who have registered for the Proposers’ Day.

For further information on teams, the competition, requirements, background, etc., please see www.thegrandchallenge.eu or contact the founder, Professor Dr. Thomas Burri, the executive director, Viktoriya Zakrevskaya, and the organizing team at thegrandchallenge@unisg.ch.


Q&A with Thomas Burri, Founder of the Grand Challenge, November 2022

Viktoriya Zakrevskaya: Where did you find the inspiration for this project? How did you come up with the idea?

Thomas Burri: In 2015, I was at the famous DARPA Robotics Challenge in LA with my first PhD student. This was the Challenge with the humanoid robots. Ever since then, I have thought: wouldn’t it be great to do something like this? But it took me some time to figure out how to apply the idea of a Challenge, this special type of engineering competition, to the social sciences – without losing what makes these DARPA Challenges so unique and exciting.

VZ: What is the Grand Challenge? Why such a name?

TB: In our Grand Challenge, teams compete by assessing AI technology we provide in the light of new law. We want this to be big, so “grand”, not minor or small... [laughs]. But seriously, the first DARPA Challenge, that’s the one with the autonomous cars in 2004, was called the DARPA Grand Challenge. As we are doing the first of its kind in the social sciences, it seems fitting to go with an adaptation of that name: the First University of St. Gallen Grand Challenge. But also, you know, it is no exaggeration to call the task of implementing the EU AI Act a grand challenge.

VZ: What problem does the Grand Challenge solve?

TB: It sheds light on how this upcoming legislation, the EU’s Act on AI, can be implemented. The important point is that the Grand Challenge is not a theoretical exercise. It is a hands-on endeavor involving real technology from industry. And the main characteristic of this endeavor is that it is a competition. But the Grand Challenge may not only help with implementing the AI Act; it could also reveal deficiencies in the Act itself.

VZ:  What is the novelty of the project?

TB: I think what makes this novel is that we are holding a competition to stress-test a new piece of legislation. Such a competition is a new idea. Of course, there is a competitive element in any court proceeding, but as far as I can see, no one has done such a challenge in the social sciences before. We are also thinking about other legislation [than the AI Act] that could be subject to such a Grand Challenge in the future.

VZ: Who benefits?

TB: So many are bound to benefit! It is not just the team winning the Grand Challenge and taking the 100’000 CHF, but all teams, who position themselves in a new market and sharpen their skills. Those who make AI tech available for the competition benefit from an early assessment and from visibility. The University of St. Gallen and Switzerland become visible in AI and law. But these kinds of competitions can also kick-start entirely new economic sectors.

VZ: Why should the public care?

TB: Well, you know, I have just come back from the Swiss Robotics Days in Lausanne. It is just very exciting to have the latest AI and robotic tech at your fingertips. There is a special vibe, a pioneering spirit, at these events. Ours will also have that, and it will be tangible to the public. At the DARPA Robotics Challenge in 2015, there were thousands of spectators. Or look at the ETH Cybathlon 2016: it filled the Zurich Hallenstadion! How amazing is that? I am not sure we will reach this scale, but who knows? Beyond that, the huge buzz around AI will only intensify around our event. And then: have you ever seen the enthusiasm of children when they see robots?

VZ: Which results do you expect to achieve? By when?

TB: The most important output of these kinds of events is hard to quantify. They create a spirit, a community, an excitement. And this can lead to wonderful things: research cooperations, new ideas, new undertakings, new friendships. Of course, we will also have tangible output. We will have a report on the new methods developed for assessments under the AI Act. As an academic, I expect that we can write several papers on this. It is also possible that the Grand Challenge leads to new findings that can be fed back into the EU legislative process. For companies, I think the most important benefit is that they get clarity on what the EU AI Act means for them, including recommendations, etc. With this, their engineers and tech scientists can continue their work without fearing all the time that they “do something wrong”. But for me personally, this is all more of a synergy. What motivates me to do this is the spirit of the event, the excitement of this new idea.

VZ: On a more critical note, who makes the assessment and how?

TB: We expect 6 teams consisting of experts in compliance and the law, but also in technology. There are several assessment methods out there, but they need to be improved. The Grand Challenge should be a catalyst for this.


VZ: Who makes the selection of the teams?

TB: We don’t know yet how many teams will apply after the Proposers’ Day early in 2023. But we as organizers will make a careful selection together with two members of our Jury. There are objective criteria to select teams, like expertise, credibility, commitment, and diversity in terms of gender, age, and general background.


Q&A with Daniel Trusilo, advisor to the Grand Challenge, December 2022

Daniel Trusilo: There are still many outstanding questions, but one area of research that has been especially interesting for me is what happens when socio-technical systems that incorporate artificial intelligence are operated in an uncontrolled environment outside of the lab. For example, if you deploy a system in a dynamic, real-world environment with an open context and many unknown complications, does the possibility of what is called ‘emergent behavior’ increase? This refers to the possibility that the interaction of the system with its environment will exhibit surprising behavior, which has not been observed in a controlled lab environment. In other words, the system can act in new and surprising ways to achieve an objective that it has been given.

Much of my research is on the practical application of ethics to assess actual systems. It is fascinating to observe that sometimes the questions and conversations that take place when evaluating the ethical and legal risks presented by a particular system are more important than the actual findings of the evaluations themselves. So perhaps it's not so important to identify a concrete solution to a risk, but rather to have a conversation with the designers, developers, programmers, and users about various aspects of the systems, so that they understand there is potential for risks that they may have never previously considered. Those conversations are essential because they shed light on what risks exist, how a practitioner may think about them in the real world, and how a system should be applied or used in a complex real-life environment. Moreover, taking time to contemplate such questions will make all of the stakeholders better prepared to address risk-management considerations in the very design of the system.

Daniel Trusilo: On the one hand, every single engineer, designer, and programmer I've worked with has had good intentions. By this, I mean that the people I've worked with do not intend for the system they're building to cause harm or to be used in a way that may lead to harm. On the other hand, they rely too heavily on laws, regulations, and policies to bound the use of their systems. But the development of laws and regulations is not keeping up with the rapid advancement of the technology itself. All this brings us to a place where, in the absence of clear regulations, there is an inherent responsibility on designers and programmers that is difficult to fully address without conversations with philosophers, ethicists, and lawyers. A multidisciplinary approach helps identify and address multiple risks in the absence of clear laws and regulations.

Daniel Trusilo: That's a question that I have myself. An area of research that warrants more investigation is the concrete consequences of actually having conversations about the risks presented by AI systems. Do such discussions result in a change in the system development process? My experience has been that developers and engineers realize that there are a lot of important and interesting questions that need to be asked beyond simply solving an engineering problem. Initially, people may push back and say “ethics is boring” or “I'm not really interested in that,” but as soon as one asks the right questions or presents ways systems can be misused or repurposed to cause harm, they realize that, actually, it's quite interesting to think about it and to design systems in such a way that those risks are taken into account.

Daniel Trusilo: There are two sides to this. On the one hand, there should be more focus on ethical considerations, on how to make responsible artificial intelligence, and on the potential misuse of systems. It's a field that's growing. In fact, there's more interest and relevance every single day. In the academic world, 10 years ago there were a handful of people researching responsible and ethical artificial intelligence. Now you can see evidence of this everywhere, and it's becoming a very hot topic. On the other hand, we need scalable solutions for identifying and addressing risk in emerging technologies. Right now, you can have someone like me, who's done years of research on ethical and responsible technology, assess a system and identify the potential risks and maybe potential ways to address those risks. But that's not possible on a large scale. One can imagine that in the near future the auditing of ethical risks presented by AI systems will be a service industry, much like financial auditing is today. To be able to do such auditing in a scalable way, various institutions (universities, research labs, and the organizations that want to use AI) will have to develop new ways of assessing risk, and academia will naturally play its role in this process.


Q&A with Juliane Beck, advisor to the Grand Challenge, January 2023

Juliane Beck: The EU AI Act aims to establish responsibility and accountability as a related step. This is most evident in Article 14 EU AI Act on human oversight. According to this article, the individuals to whom human oversight is assigned (i.e., mainly the operator of the AI system) shall be able to understand the capacities and limitations of any high-risk AI system and interpret its output correctly. Further, the operator shall be enabled to disregard, override, or reverse the system's output, to intervene in the system's operation or interrupt it, and to remain aware of the possibility of automation bias. All of this aims towards establishing human agency — a precondition for assigning responsibility and establishing accountability in case of system malfunctioning. However, scientific research has shown that we encounter immense practical difficulties in ensuring human oversight as spelled out by Article 14 EU AI Act.

A further difficulty lies in the following: the EU AI Act does not provide a legal basis on which affected individuals may claim their rights if they have been negatively affected by an AI system. There are other legal acts at the national and EU levels on the basis of which affected human beings may bring their case to court. Yet this detour makes it more complicated for those negatively affected by an AI system to claim their rights.

The GDPR, for example, grants data subjects certain rights (especially in Articles 15-22 GDPR). Further, data subjects may approach the supervisory authorities to lodge a complaint (see Articles 57(1)(f) and 77 GDPR). By contrast, the EU AI Act focuses on supporting and fostering innovation. The protection of EU values and fundamental rights seems to be only of secondary interest. This is potentially one of the main reasons the EU AI Act is constructed as an administrative measure.

Yet it should be added that the AI Liability Directive complements the EU AI Act. However, this directive must be transposed into national law, meaning it is not directly enforceable.

JB: The EU AI Act should be able to protect fundamental rights and EU values. Therefore, a corresponding compliance system ought to be established.

The way the compliance system is designed in the Commission's Proposal for the EU AI Act is not sufficient to achieve this goal. According to the Proposal, it is mainly up to the provider to undertake an ex-ante assessment of whether a high-risk AI system complies with the requirements of Title III, Chapter 2 EU AI Act (see Articles 19 and 43 EU AI Act). In most cases, this means a self-conformity assessment without external control. Thus, the EU AI Act confers broad discretion on AI providers in risk-related, fundamental-rights-sensitive decisions, while external scrutiny lags behind.

A further problem pertains to the enforcement level. The EU AI Act offers neither individual rights of remedy nor (collective) complaint mechanisms for those whose fundamental rights have been interfered with. This facet of the EU AI Act is in need of reconstruction and refinement. Individuals should not only be granted more procedural rights (meaning rights to complaint and redress) but also substantive rights (for example, a right to information that would allow affected individuals to claim their rights effectively).

JB: At the moment, the regulation of AI in Europe is being crafted. It is preferable to have a single, uniform approach to AI regulation encompassing all Member States of the European Union rather than merely separate nation-state approaches. Against this backdrop, I am excited to see the Grand Challenge happen. It brings together scientists and practitioners while the regulation of AI in the European Union is still in the making.

Yet we must be aware that many proposals for AI regulation are on the table, and they come from different countries and regions worldwide. For example, proposals for AI regulation are being drafted in the US and China. They differ from the European Union's approach in many details, especially regarding the duties imposed on AI providers.

Despite the differences in the approaches to AI regulation — also stemming from cultural differences, including differences in the acceptance of technology — it is good to have regional building blocks forming. An international dialogue may be fostered based on the main parameters agreed upon on the regional level.

Of course, it is challenging to establish an international standard. However, this shouldn't prevent us from trying to reach a consensus on AI regulation, and this is where international law may play a role.

The time we need to reach this consensus depends on how quickly regional approaches to AI regulation progress and how far apart they lie. Solid regional approaches may be linked in a next step. Yet international dialogue should take place in parallel, accompanying the regulatory initiatives on the regional level. It is pivotal to have ongoing discussions to address cultural differences and engage with different conceptions of how to regulate AI. Bridging gaps will certainly not be easy, especially given the fear of leading tech nations that overly strict regulation of AI may impede technological progress.

VZ: Thank you, Juliane. I hope we will be able to contribute to this debate, also with the post-event activities.