Artificial Intelligence is all the buzz in today's tech. LLMs, or Large Language Models, have the potential to do great things, enabling efficiencies and focus that haven't existed before. But for all the good AI can accomplish, it can also be a powerful force for unethical behaviors and outcomes. Artificial intelligence can worsen existing social inequalities, deepening unfairness in housing, education, and healthcare. Because AI favors the average or typical, it can lead to less support for people who are disabled. In this session, I will explain how AI's focus on statistics can ignore the needs of individuals who don't fit the norm, and how we as inclusive designers can help ensure that AI is a force for positive, inclusive outcomes.
This is a workshop that anyone can apply when creating a new design, product, process, or policy. Each table will be given a design topic from the WUD examples (e.g., Public Spaces, Product Packaging) and a "How might we" statement starter. For example, "HMW design an urban transportation system integrating vehicles, public transit, and non-powered mobility?"
Participants will then use three methods: "What's on Your Radar", "Customer, Employee, Shareholder", and "Consequences Scanning" to explore what they would consider important, what others would, and the broader impact a solution might have on marginalized populations, society at large, and the environment. After each method, participants will briefly discuss the similarities and differences in their perspectives.
We'll wrap up with a look at what next steps could follow when using these methods in the real world, and a brief Q&A.