
AI at Auraria Library

Privacy

Each company that creates or uses an AI tool defines how it protects user information and privacy. Read a tool's privacy policy before using it to ensure your use will not expose your personal information. Many companies also train their AI tools on the prompts and content users submit.

Always follow the policies for your institution.

 

For data privacy, keep in mind:

  • Handling sensitive data requires strict adherence to privacy laws and ethical guidelines. 

  • Ensuring anonymization and secure storage of data is essential. 
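As a concrete precaution, obvious identifiers can be stripped from text before it is pasted into an AI tool. The sketch below is a minimal illustration in Python; the regular-expression patterns and the `redact` helper are hypothetical examples, not a complete anonymization solution, and real anonymization requires far more than pattern matching.

```python
import re

# Simple illustrative patterns for common identifiers.
# A real anonymization workflow needs much more than regex matching.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.edu or 303-555-1234."))
```

Running the example replaces the email address and phone number with `[EMAIL]` and `[PHONE]` placeholders while leaving the rest of the sentence intact.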

 

Works Cited

"Please provide an outline of what college students should consider when using AI in research. I want the limitations and ethical issues as well as the benefits.” Copilot: AI Assistant. Microsoft, 16 Oct. 2024, copilot.microsoft.com

Transparency and Accountability

When using AI tools, it's important to be aware of several transparency and accountability issues:

Transparency Issues

  • Understanding Decision-Making
    • AI systems often operate as "black boxes," making it difficult to understand how they arrive at specific decisions. 
    • This lack of transparency can lead to mistrust and uncertainty.
  • Data Usage
    • Knowing what data is being used, how it's collected, and how it's processed is crucial. 
    • Transparency in data usage helps ensure that AI systems are not biased or discriminatory.
  • Algorithmic Bias
    • Transparency in the development and training of AI models can help identify and mitigate biases that may affect the fairness and accuracy of the system.

Accountability Issues

  • Responsibility Assignment
    • Clear mechanisms must be in place to assign responsibility when AI systems cause harm or make errors. 
    • This includes defining who is accountable—developers, users, or organizations.
  • Regulatory Compliance
    • Ensuring that AI systems comply with existing laws and regulations is essential. 
    • This includes adhering to data protection laws and ethical guidelines.
  • Oversight and Monitoring
    • Continuous monitoring and auditing of AI systems are necessary to ensure they operate as intended and to address any issues that arise.

Addressing these issues involves a combination of technical, legal, and ethical approaches to ensure AI systems are developed and deployed responsibly.

 

Works Cited

"What are the transparency and accountability issues I should be aware of when using AI tools?" prompt. Copilot: AI Assistant, Microsoft, 9 Jan. 2025,  copilot.microsoft.com.

 

Bias and Fairness

Bias and fairness in AI are crucial considerations as AI systems are increasingly integrated into decision-making processes that impact various aspects of society, including hiring, law enforcement, healthcare, finance, and education. Here's an overview of the key concepts:

Bias in AI

AI bias occurs when an AI system reflects or amplifies prejudices or inequalities present in the data used to train it. This can happen unintentionally and may stem from various sources:

Sources of Bias:

  • Data Bias: If the data used to train an AI model is unrepresentative or skewed, the model can inherit those biases. For example, if an AI system is trained on historical data that reflects societal biases, it may perpetuate those same biases in its predictions or decisions.

    • Example: If a facial recognition system is trained primarily on images of light-skinned individuals, it might perform poorly on darker-skinned individuals, demonstrating racial bias.
  • Sampling Bias: If certain groups are overrepresented or underrepresented in the training data, the AI system may fail to generalize well for those groups.

    • Example: A medical AI system trained mostly on data from one demographic group may be less accurate for other groups with different genetic predispositions or health conditions.
  • Label Bias: The way labels or outcomes are assigned during training can reflect human biases. For example, if human annotators have implicit biases, these biases can be introduced into the training data.

  • Algorithmic Bias: Sometimes, the algorithms themselves may develop biases due to the way they process and prioritize data. Certain patterns or correlations might be given more weight than others based on the design of the model.

Types of Bias in AI:

  • Prejudicial Bias: Directly reflects discriminatory practices or prejudices based on race, gender, age, etc.
  • Measurement Bias: Occurs when the tools or processes used to collect data are skewed, causing misrepresentation of certain groups.
  • Exclusion Bias: Occurs when important data points or perspectives are excluded from the training set, leading to unfair predictions.

Fairness in AI

Fairness refers to ensuring that AI systems provide equitable outcomes for all groups, especially marginalized or historically disadvantaged populations. The concept of fairness in AI is multifaceted and involves addressing how decisions made by AI systems impact different groups.

Challenges in Achieving Fairness

Achieving fairness in AI is complex and involves balancing different competing fairness goals. For example:

  • Fairness vs. Accuracy: Striving for fairness may sometimes reduce the overall accuracy of the model. For example, if you adjust a model to be fairer to one group, it might end up being less accurate for others.
  • Fairness vs. Interpretability: Some AI models (like deep learning networks) are complex and not very interpretable. It can be difficult to ensure fairness in these models, and the lack of transparency can lead to unjust outcomes.
  • Dynamic and Evolving Data: Bias in AI models can evolve as new data is introduced. This means that fairness interventions must be continuously monitored and adapted.
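One way to make the fairness discussion concrete is to compute a simple group metric such as demographic parity: whether a model's positive-outcome rate differs across groups. The sketch below is an illustrative Python example with made-up predictions, not a real audit, and the `positive_rates` helper and the data are hypothetical.

```python
from collections import defaultdict

def positive_rates(predictions, groups):
    """Fraction of positive (1) predictions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical hiring-model outputs: 1 = recommended, 0 = not recommended.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = positive_rates(preds, groups)
gap = abs(rates["A"] - rates["B"])
print(rates)   # per-group recommendation rates
print(f"demographic parity gap: {gap:.2f}")
```

With these made-up numbers, group A is recommended 75% of the time and group B only 25%, a gap that a fairness review would flag for investigation. Demographic parity is only one of several competing fairness definitions, which is exactly the tension described above.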

 

Works Cited

“Bias and fairness in AI” prompt. ChatGPT, OpenAI, 6 Jan. 2025, chat.openai.com.

Intellectual Property

Artificial intelligence tools and large language models are trained on content that already exists. This content includes text, images, music, videos, and more, and it is the intellectual property of its human creators. Hundreds of lawsuits have been filed by artists and authors over the unauthorized use of their works to train AI tools.

Major concerns include:

  • Properly attributing AI-generated content and respecting copyright laws is crucial. 

  • Understanding the ownership of AI-generated data and results is important. 

 

Works Cited

“Please provide an outline of what college students should consider when using AI in research. I want the limitations and ethical issues as well as the benefits.” prompt. Copilot: AI Assistant, Microsoft, 16 Oct. 2024, copilot.microsoft.com.

Discover Lawsuits About Intellectual Property and AI

Environmental Impact

Artificial intelligence systems use an extraordinary amount of energy. Concerns include:

High energy use

  • Training and running large AI models consumes substantial electricity, and demand from AI data centers continues to grow.

Carbon footprint

  • Training large AI models can consume vast amounts of electricity, often generated from fossil fuels, leading to significant carbon emissions.

Resource depletion

  • High-performance GPUs and data centers require substantial materials and energy to manufacture and maintain. 
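The carbon-footprint concern above is essentially arithmetic: the energy a training run consumes, multiplied by the carbon intensity of the electricity grid that supplies it. The back-of-the-envelope sketch below uses entirely hypothetical placeholder figures, not measurements of any real system.

```python
# Illustrative back-of-the-envelope estimate (all numbers are hypothetical).
gpu_power_kw = 0.4           # assumed average draw of one GPU, in kW
num_gpus = 1000              # assumed GPUs used for a training run
hours = 720                  # assumed 30 days of continuous training
grid_kg_co2_per_kwh = 0.4    # assumed carbon intensity of the grid

# Energy consumed, then converted to tonnes of CO2 emitted.
energy_kwh = gpu_power_kw * num_gpus * hours
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"{energy_kwh:,.0f} kWh, roughly {emissions_tonnes:,.0f} tonnes CO2")
```

Even with modest placeholder numbers, a single sustained training run reaches hundreds of thousands of kilowatt-hours, which is why the grid mix powering data centers matters so much.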

 

 

Works Cited

"Can you elaborate on the ethical issues of the vast amounts of energy used by AI?" Copilot: AI Assistant. Microsoft, 16 Oct. 2024, copilot.microsoft.com

 

Learn More about Energy Use and AI