REPOSITORY OF THE UNIVERSITY OF BIAŁYSTOK (UwB)

Please use this identifier to cite or link to this item: http://hdl.handle.net/11320/11690
Full metadata record (DC field: value)
dc.contributor.author: Rejmaniak, Rafał
dc.date.accessioned: 2021-10-11T05:58:49Z
dc.date.available: 2021-10-11T05:58:49Z
dc.date.issued: 2021
dc.identifier.citation: Białostockie Studia Prawnicze, Vol. 26, no. 3, 2021, pp. 25-42
dc.identifier.issn: 1689-7404
dc.identifier.uri: http://hdl.handle.net/11320/11690
dc.description.abstract: Artificial intelligence systems are currently deployed in many areas of human activity. Such systems are increasingly assigned tasks that involve taking decisions about people or predicting future behaviours. These decisions are commonly regarded as fairer and more objective than those taken by humans, as AI systems are thought to be resistant to such influences as emotions or subjective beliefs. In reality, using such a system does not guarantee either objectivity or fairness. This article describes the phenomenon of bias in AI systems and the role of humans in creating it. The analysis shows that AI systems, even if operating correctly from a technical standpoint, are not guaranteed to take decisions that are more objective than those of a human, but those systems can still be used to reduce social inequalities.
dc.language.iso: en
dc.publisher: Wydział Prawa Uniwersytetu w Białymstoku, Temida 2
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND 3.0)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/3.0/deed.pl
dc.subject: AI discrimination
dc.subject: AI fairness
dc.subject: algorithmic bias
dc.subject: artificial intelligence
dc.title: Bias in Artificial Intelligence Systems
dc.type: Article
dc.rights.holder: Attribution-NonCommercial-NoDerivs 3.0 Unported (CC BY-NC-ND 3.0)
dc.identifier.doi: 10.15290/bsp.2021.26.03.02
dc.description.Email: r.rejmaniak@uwb.edu.pl
dc.description.Biographicalnote: Rafał Rejmaniak is Assistant Professor in the Department of Historical and Legal Sciences, Theory and Philosophy of Law, and Comparative Law at the Faculty of Law, University of Białystok, Poland.
dc.description.Affiliation: University of Białystok, Poland
dc.description.references: Angwin J., Larson J., Mattu S. and Kirchner L., Machine Bias, ProPublica 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
dc.description.references: Barfield W. and Pagallo U., Advanced Introduction to Law and Artificial Intelligence, Cheltenham/Northampton 2020.
dc.description.references: Barocas S. and Selbst A.D., Big Data's disparate impact, "California Law Review" 2016, vol. 104, no. 2.
dc.description.references: Berendt B. and Preibusch S., Toward accountable discrimination-aware data mining: The importance of keeping the human in the loop – and under the looking-glass, "Big Data" 2017, vol. 5, no. 2.
dc.description.references: Boden M.A., Sztuczna inteligencja. Jej natura i przyszłość, trans. T. Sieczkowski, Łódź 2020.
dc.description.references: Borysiak W. and Bosek L., Komentarz do art. 32, (in:) M. Safjan and L. Bosek (eds.), Konstytucja RP. Tom I. Komentarz do art. 1–86, Warsaw 2016.
dc.description.references: Brennan T., Dieterich W. and Ehret B., Evaluating the predictive validity of the COMPAS risk and needs assessment system, "Criminal Justice and Behavior" 2009, vol. 36, no. 1.
dc.description.references: Cataleta M.S. and Cataleta A., Artificial Intelligence and Human Rights, an Unequal Struggle, "CIFILE Journal of International Law" 2020, vol. 1, no. 2.
dc.description.references: Coeckelbergh M., AI Ethics, Cambridge/London 2020.
dc.description.references: Cummings M.L., Automation and Accountability in Decision Support System Interface Design, "The Journal of Technology Studies" 2006, vol. 32, no. 1.
dc.description.references: Danks D. and London A.J., Algorithmic Bias in Autonomous Systems, "Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017)", https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf.
dc.description.references: Davenport T. and Kalakota R., The potential for artificial intelligence in healthcare, "Future Healthcare Journal" 2019, vol. 6, no. 2.
dc.description.references: Dymitruk M., Sztuczna inteligencja w wymiarze sprawiedliwości? (in:) L. Lai and M. Świerczyński (eds.), Prawo sztucznej inteligencji, Warsaw 2020.
dc.description.references: European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012(INL)).
dc.description.references: Fjeld J., Achten N., Hilligoss H., Nagy A. and Srikumar M., Principled Artificial Intelligence. Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI, Cambridge 2020.
dc.description.references: Flasiński M., Wstęp do sztucznej inteligencji, Warsaw 2020.
dc.description.references: Fry H., Hello world. Jak być człowiekiem w epoce maszyn, trans. S. Musielak, Krakow 2019.
dc.description.references: Geman S., Bienstock E. and Doursat R., Neural networks and the bias/variance dilemma, "Neural Computation" 1992, vol. 4, no. 1.
dc.description.references: Hacker P., Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against Algorithmic Discrimination under EU Law, "Common Market Law Review" 2018, vol. 55.
dc.description.references: High-Level Expert Group on Artificial Intelligence (appointed by the European Commission in June 2018), A Definition of Artificial Intelligence: Main Capabilities and Scientific Disciplines, Brussels 2019.
dc.description.references: High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, Brussels 2019.
dc.description.references: Jernigan C. and Mistree B.F., Gaydar: Facebook friendships expose sexual orientation, "First Monday" 2009, vol. 14, no. 10.
dc.description.references: Kasperska A., Problemy zastosowania sztucznych sieci neuronalnych w praktyce prawniczej, "Przegląd Prawa Publicznego" 2017, no. 11.
dc.description.references: Lattimore F., O'Callaghan S., Paleologos Z., Reid A., Santow E., Sargeant H. and Thomsen A., Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias. Technical Paper, Australian Human Rights Commission, Sydney 2020.
dc.description.references: Massey G. and Ehrensberger-Dow M., Machine learning: Implications for translator education, "Lebende Sprachen" 2017, vol. 62, no. 2.
dc.description.references: Michie D., Methodologies from Machine Learning in Data Analysis and Software, "The Computer Journal" 1991, vol. 34, no. 6.
dc.description.references: Neff G. and Nagy P., Talking to Bots: Symbiotic Agency and the Case of Tay, "International Journal of Communication" 2016, no. 10.
dc.description.references: Ntoutsi E., Fafalios P., Gadiraju U., Iosifidis V., Nejdl W., Vidal M.-E., Ruggieri S., Turini F., Papadopoulos S., Krasanakis E., Kompatsiaris I., Kinder-Kurlanda K., Wagner C., Karimi F., Fernandez M., Alani H., Berendt B., Kruegel T., Heinze Ch., Broelemann K., Kasneci G., Tiropanis T. and Staab S., Bias in data-driven artificial intelligence systems – An introductory survey, "WIREs Data Mining and Knowledge Discovery" 2020, vol. 10, no. 3.
dc.description.references: O'Neil C., Broń matematycznej zagłady. Jak algorytmy zwiększają nierówności i zagrażają demokracji, trans. M.Z. Zieliński, Warsaw 2017.
dc.description.references: Raji I.D. and Buolamwini J., Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products, "Conference on Artificial Intelligence, Ethics, and Society" 2019, https://www.media.mit.edu/publications/actionable-auditing-investigating-the-impact-of-publicly-naming-biased-performance-results-of-commercial-ai-products/.
dc.description.references: Ribeiro M.T., Singh S. and Guestrin C., "Why Should I Trust You?": Explaining the Predictions of Any Classifier, "22nd ACM SIGKDD International Conference 2016, San Francisco", https://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf.
dc.description.references: Rodrigues R., Legal and human rights issues of AI: Gaps, challenges and vulnerabilities, "Journal of Responsible Technology" 2020, vol. 4.
dc.description.references: Roselli D., Matthews J. and Talagala N., Managing Bias in AI, "Companion Proceedings of the 2019 World Wide Web Conference, San Francisco, CA, USA", May 2019.
dc.description.references: Rutkowski L., Metody i techniki sztucznej inteligencji, Warsaw 2012.
dc.description.references: White Paper on Artificial Intelligence. A European approach to excellence and trust, COM(2020) 65 final, European Commission, Brussels 2020.
dc.description.references: Yapo A. and Weiss J., Ethical Implications of Bias in Machine Learning, "Proceedings of the Annual Hawaii International Conference on System Sciences" 2018.
dc.description.references: Zuiderveen Borgesius F., Discrimination, artificial intelligence and algorithmic decision-making, Council of Europe, Strasbourg 2018.
dc.description.volume: 26
dc.description.number: 3
dc.description.firstpage: 25
dc.description.lastpage: 42
dc.identifier.citation2: Białostockie Studia Prawnicze
dc.identifier.orcid: 0000-0003-1908-5844
Appears in collection(s): Artykuły naukowe (WP)
Białostockie Studia Prawnicze, 2021, Vol. 26 nr 3

Files in this item:
File: BSP_26_3_R_Rejmaniak_Bias_in_Artificial_Intelligence_Systems.pdf (210.36 kB, Adobe PDF)


This item is licensed under a Creative Commons License.