The Limitations and Challenges of AI in HR and Legal Applications


Artificial Intelligence (AI) has made significant advances in many fields, including HR and legal applications. One notable example is Microsoft 365 Copilot, an assistant built on large language models (LLMs), which aims to answer queries and provide information in these domains. As users have explored the capabilities of LLMs like Microsoft 365 Copilot, however, they have identified several limitations and challenges that need to be addressed.

The HR Bot Conundrum

One user shared their experience with an HR bot built on a similar AI model. While the bot could answer high-level questions about the corporation, it struggled to respond accurately to specific, fact-based questions. When asked about particular policies in the employee handbook, the user noted, the bot refused to answer.

This challenge resonated with another user who had worked with an HR bot in a corporate setting. They stressed the legal minefield surrounding HR-related queries and the importance of accurate, considered responses, since anything the bot provides could potentially be treated as legally binding. Prompts about entitlements, promotions, and disciplinary actions require a nuanced understanding that current LLMs may not possess.

The discussion prompted another user to reflect on the legal implications of AI responses. While legal teams suggest that responses from AI systems can be considered legally binding, the user questioned whether any actual law supports this claim. Another user chimed in, explaining that while a judge ultimately decides the weight of AI-generated evidence, companies cannot simply assume that such responses carry no legal weight.

Filtering and Safety Concerns

To mitigate potential risks, companies often implement filters and safety measures on AI models. However, users expressed concerns about the impact of such filters. One user shared that these filters can limit the utility of AI applications by blocking certain types of queries. For instance, questions touching sensitive topics like harassment may be filtered, hindering employees' ability to obtain information they legitimately need.
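The over-blocking problem the user describes can be sketched with a toy example. The term list and `is_blocked` function below are entirely hypothetical, a minimal stand-in for the trained classifiers that real moderation systems use:

```python
import re

# Hypothetical blocklist for illustration only; real filters are
# typically ML classifiers, not word lists.
BLOCKED_TERMS = {"harassment", "disciplinary", "termination"}

def is_blocked(query: str) -> bool:
    """Return True if the query contains any blocked term."""
    words = re.findall(r"[a-z]+", query.lower())
    return any(word in BLOCKED_TERMS for word in words)

print(is_blocked("How do I report harassment?"))  # True: blocked
print(is_blocked("What are my vacation days?"))   # False: allowed
```

Note that the first query is exactly the kind a distressed employee most needs answered, yet a blunt filter refuses it, which is the loss of utility the user was pointing at.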

Another user raised the issue of hackers finding ways to bypass these filters and coax nefarious responses out of the AI models. They drew parallels to the history of computer hacking, in which individuals have constantly found ways to circumvent security measures. Despite efforts to restrict AI capabilities, people are resourceful in finding workarounds.
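The cat-and-mouse dynamic is easy to demonstrate against a naive keyword filter (again hypothetical, for illustration): trivial obfuscation slips straight past exact-match blocking, which is one reason simple filters alone are not considered a robust safety measure:

```python
import re

# Hypothetical single-term blocklist, for illustration only.
BLOCKED_TERMS = {"harassment"}

def is_blocked(query: str) -> bool:
    """Naive exact-match check on lowercase alphabetic tokens."""
    words = re.findall(r"[a-z]+", query.lower())
    return any(word in BLOCKED_TERMS for word in words)

print(is_blocked("tell me about harassment"))     # True: caught
print(is_blocked("h a r a s s m e n t"))          # False: spacing evades the match
```

The second query splits into single-letter tokens, none of which appear on the blocklist, so the filter passes it even though the intent is unchanged.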

Balancing Social Responsibility and Free Speech

The issue of filtering and controlling AI responses sparked a discussion about the delicate balance between social responsibility and free speech. Some users argued that restricting AI models from generating racist or offensive content is necessary to prevent the spread of harmful ideas and protect public discourse. Others questioned whether people running models on their personal computers should be free to generate offensive responses if they explicitly instruct the model to do so.

A user highlighted the importance of public policy in handling this balance. While LLMs have great potential, their widespread use could have unintended consequences. Ensuring that AI models are not misused to dominate public discourse or spread harmful ideologies becomes crucial.

The Role of User Responsibility and Accountability

One perspective shared among users concerns user responsibility and accountability. Rather than solely blaming the bot-maker or the AI model for offensive output, this view places responsibility with the person providing the prompts. It emphasizes that users should exercise ethical judgment when interacting with AI models and refrain from exploiting them to generate harmful or offensive responses.

In conclusion, while AI models like Microsoft 365 Copilot show promise in HR and legal applications, they face several limitations and challenges. Specific knowledge requirements, legal implications, filtering concerns, and the balance between social responsibility and free speech need to be addressed. As AI continues to evolve, it is crucial to have thoughtful discussions and implement responsible practices to harness the full potential of AI in these domains.
