The Human Side of AI

    By Sarah Bennett, Skeo Solutions
    2025 Chapter Vice President

    As AI becomes increasingly prevalent in the workplace, ethical considerations are crucial. In the rush to implement AI, we must ask what considerations exist and how we can ensure AI is used responsibly. Three major concerns with AI usage in business are data privacy, deepfakes, and the need for responsible practices. HR departments can play a pivotal role in shaping policies that align with organizational values and uphold ethical standards.

    Data Privacy
    Data privacy is a core ethical and legal concern in the use of AI. Data powers AI systems, and that data is often personal and sensitive. Companies are collecting and analyzing more detailed data than ever before, from employee performance to customer profiles. While this data can drive valuable insights, it also raises serious privacy concerns.

    What are some key considerations and risks?
    - Data Consent and Transparency: Stakeholders should be informed about the data being collected, how it will be used, and for what purposes. Storage and protection of the data should also be made known. Businesses should obtain explicit, informed consent for any data used in an AI system or for automated decisions.

    - Bias and Fairness: Training AI on biased or unrepresentative datasets can lead to discriminatory outcomes, particularly in recruitment or promotions. States like Texas have introduced laws governing the responsible use of AI for employment practices.

    - Data Minimization: Over-collection of data increases the risk of misuse and breaches. Collect only the data a task requires, and avoid entering sensitive or identifiable information into AI systems unless it is necessary and adequately protected.

    What can HR do?
    - Collaborate with external and internal counsel, as well as technology professionals, to develop data use policies that address transparency, bias, and data collection standards.

    - Ensure compliance with data protection regulations such as GDPR, HIPAA, or CCPA.

    - Train HR teams on the ethical handling of AI-driven analytics platforms.

    Deepfakes
    Deepfakes are AI-generated media that appear realistic. They can mimic a person’s appearance or voice, or fabricate entirely fictional scenarios. This media is sometimes used in creative endeavors, but it can be harmful when shared as factual information without proper verification. In a business context, deepfakes can spread misinformation, facilitate financial fraud, create legal risks, and damage a company’s reputation.

    What are some key considerations and risks?
    - Fraud: Impersonation of executives or HR personnel can be used in phishing scams or to temporarily manipulate stock prices.

    - Misinformation: Deepfake materials can create internal disruptions by circulating counterfeit messages to employees and cause brand confusion by impersonating legitimate brands.

    - Legal and Compliance Risks: Deepfakes create compliance and legal issues when they influence investors or defame individuals.

    What can HR do?
    - Develop a zero-tolerance policy against the creation or distribution of harmful synthetic media.

    - Establish multi-factor authentication and verification protocols.

    - Partner with IT to implement detection tools and deploy training on phishing and deepfake risks.

    Responsibility
    Using AI responsibly means building systems that align with the company’s values and emphasize transparency, accountability, and fairness. AI tools that make decisions about hiring, promotions, or compensation must be scrutinized for fairness and transparency; if they cannot be, they should not be used for those purposes. People are less likely to trust systems they can’t understand, and hidden algorithms can lead to legal trouble as well as mistrust. Algorithms should be transparent so that the decision process remains clear and open.

    What are some key considerations and risks?
    - Automation: Maintain human oversight in critical decisions, and do not rely on AI alone for consequential outcomes.

    - Explainability: Only use AI systems that can clearly explain how decisions are made.

    - Accountability: Establish clear ownership of AI tools and their outputs.

    What can HR do?
    - Avoid over-reliance on AI screening tools that may unintentionally exclude candidates or create bias in hiring processes.

    - Train managers and staff to understand the limits of AI-driven performance reviews.

    - Encourage a consistent framework of AI usage through policies, training, and the adoption of ethical standards and shared language.

    Ethical AI isn’t only a tech issue; it’s a people issue. HR professionals are in a powerful position to lead with ethics at the forefront. A simple guiding question for crafting your policy, practices, and strategy is: how can we use AI in a way that respects our people and our mission? By weaving ethical principles into policy, training, and daily decision-making, HR can ensure that AI enhances, rather than compromises, the integrity of the workplace.