Uniting students, researchers, industry leaders & policy makers to promote fair AI policy
One day, artificial intelligence might become more intelligent than humans. Let us define that as the day AI becomes better than humans at every task humans attempt, and call such an AI a “superintelligence”. It is not clear whether superintelligence will ever come about, but there is certainly a chance that it will. If it does, there is a risk of uncontrolled automation doing unexpected things and thereby inflicting great harm, for example because a superintelligence keeps pursuing its programmed goal long after we no longer want it to. To make sure we never find ourselves facing such harmful automation, research must be done on how to make superintelligence safe, and regulatory mechanisms must be implemented to enforce that safety.
Today we are probably still many years away from creating superintelligence. It is even less clear how far we are from creating safe superintelligence, so we may well be closer to the former than to the latter. Given the considerable risk superintelligence carries, it is of vital importance that we make a plan to mitigate that risk.
This organisation was founded to be a foundational discussion community that can support the creation of a roadmap towards the implementation of safe superintelligence before the creation of unsafe superintelligence. In order to reach this goal, solutions will have to be developed for value alignment, explainability, accountability, bias, limiting nudging by AI, preserving human agency, and many other challenges.
Our purpose is to create a multidisciplinary platform that raises awareness about this and other pressing issues with AI, facilitates collaboration between different stakeholders and creates opportunities for the advancement of solutions.
Leuven AI Forum aims to become a research facilitator and knowledge disseminator: conducting AI policy research, offering consultation on the safe and ethical deployment of AI technologies, and collaborating closely with industry leaders and policy makers.
Break the Language Barrier
There is often a communication barrier between people from different disciplines (technical, business, policy, philosophical, humanitarian, etc.) and different occupations (professionals, students, academics, the general public) when discussing the nuances of AI-related topics.
LAIF aims to break down those barriers by connecting people from all backgrounds in order to facilitate general awareness of AI topics and a holistic approach to AI development and research.
At LAIF, we conceptualise, design and organise a host of activities tailored to the needs of different stakeholders in the AI ecosystem.