Our matching algorithm is not a ‘black box’: Together with our clients, we carefully define the underlying criteria and their impact on the matching process. We make transparent to participants which data is requested, for what purpose, and what the matches are based on.
Our matching algorithm maintains neutrality: When we design the questionnaire together, we consider the perspectives of everyone involved in order to avoid bias – in other words, to match participants neutrally, free of predetermination and filter bubbles.
Participants define the preferred characteristics of their match in a self-determined manner, instead of being matched across the board according to the “like with like” principle. Only the information deliberately made available for the matching is used; no other data from online profiles or similar sources.
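The preference-based principle described above can be sketched in a few lines. The field names, weights and scoring rule here are illustrative assumptions for the sake of the example, not CHEMISTREE's actual criteria or implementation: each participant states what they are looking for, and candidates are scored against those stated preferences rather than against similarity to the participant themselves.

```python
# Illustrative sketch of preference-based matching (hypothetical fields,
# not CHEMISTREE's actual criteria): a candidate is scored against a
# participant's stated preferences, not against similarity to the participant.

def match_score(preferences: dict, candidate: dict) -> float:
    """Fraction of a participant's stated preferences the candidate fulfils."""
    if not preferences:
        return 0.0
    hits = sum(1 for key, wanted in preferences.items()
               if candidate.get(key) == wanted)
    return hits / len(preferences)

# A participant who is herself a beginner may deliberately ask for an expert,
# which a "like with like" similarity rule would never produce:
alice_prefs = {"experience": "expert", "language": "en"}
candidates = {
    "bob":   {"experience": "expert",   "language": "en"},
    "carol": {"experience": "beginner", "language": "en"},  # most similar to Alice
}
best = max(candidates, key=lambda name: match_score(alice_prefs, candidates[name]))
print(best)  # -> bob: stated preferences win over similarity
```

Note that only the fields deliberately provided in `preferences` enter the score; any other attribute a candidate profile might contain is simply ignored, mirroring the data-minimisation principle above.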
Algorithms & AI
With great power comes great responsibility (Spider-Man)
We believe that superpowers, in particular, call for moral responsibility. The more algorithm-based software intervenes in our lives, the more indispensable it becomes to judge it by ethical as well as technical standards. This only works if its development is made transparent and it is conceived from different perspectives. As long as it remains a ‘black box’ guarded by development departments and works according to the decision-making patterns of a few people, it will display the same errors and weaknesses as the underlying human decisions. The question is therefore no longer IF algorithm-based applications are needed – it is WHICH ethical standards are needed in the development of algorithms and self-learning AI so that they work in the interests of EVERYONE.
With CHEMISTREE, Rosmarie Steininger has forged her own path to responsible, people-centered AI. Her mission is to shape public opinion beyond in-house corporate practice and individual industries, to raise awareness and to do genuine educational work. What is needed is an open, critical and, above all, universally comprehensible (!) exchange about how algorithm-controlled software works and what side effects it has – instead of media scare tactics and corporate secrecy. To this end, Rosmarie Steininger is involved in a variety of socio-political projects: