Uniting students, researchers, industry leaders and policy makers to promote fair AI policy
Let’s call that AI superintelligence. It is not clear whether superintelligence will ever come about, but there is a real chance that it will at some point in the future. If it does, there is a risk of uncontrolled automation doing unexpected things and thereby inflicting great harm. This could happen if a superintelligence keeps pursuing its programmed goal long after we want it to stop. To make sure we never find ourselves facing harmful automation, research has to be done on how to make superintelligence safe.
Today we are probably still many years away from creating superintelligence. It is even less clear how far we are from creating safe superintelligence, so we may well be closer to the former than to the latter. Given the considerable risk superintelligence brings with it, it is of vital importance that we make a plan to mitigate that risk.
This organisation was founded to be a foundational discussion community that can support the creation of a roadmap towards the implementation of safe superintelligence before the creation of unsafe superintelligence.
Our purpose is to create a multidisciplinary platform that raises awareness about pressing issues around AI, facilitates collaboration between different stakeholders and creates opportunities for the advancement of solutions.
LAIF aims to become a research facilitator, conducting AI policy research that ensures the safe and ethical deployment of AI technologies, in close collaboration with industry leaders and policy makers.
Our mission and activities are built upon the curious and active student and academic communities in Leuven and the rest of Belgium.
Break the Language Barrier
There is often a communication barrier between people from different disciplines (technical, business, policy, philosophical, humanitarian, etc.) and different occupations (professionals, students, academics, the general public) when discussing the nuances of AI-related topics.
LAIF aims to break down those barriers by connecting people from all backgrounds in order to facilitate general awareness of AI topics and a holistic approach to AI development and research.
At LAIF, we conceptualize, design and organise a host of activities tailored to fulfill the needs of different stakeholders in the AI ecosystem.