Tech News
Meta moves to turn employee activity into AI training data with mandatory tracking software
Most major tech companies now talk about building artificial intelligence from “real human behaviour.” But Meta has taken that idea inside its own workforce, turning it into a concrete workplace practice that is already raising serious privacy concerns.
The company has begun rolling out internal software across its workforce that monitors how employees use their work computers, including tracking mouse movements, keystrokes, clicks, and in some cases periodic screenshots. The system is designed to capture detailed behavioural data while staff go about their normal daily tasks.
The tracking is not optional. Employees using company-issued devices are reportedly required to run the software, with no opt-out option available on work machines. The data is being collected as part of Meta’s broader push to train advanced AI systems that can better understand and replicate how humans interact with computers in real working environments.
The goal, according to internal communications reported in recent coverage, is to improve AI models in areas where they currently struggle—such as navigating software menus, using keyboard shortcuts, and performing multi-step office tasks. By observing real employee workflows, Meta aims to generate large-scale training data that reflects how people actually work in practice.
The move is part of a wider internal shift at Meta toward building autonomous “AI agents” capable of handling digital work tasks with minimal human input. The company is increasingly focusing on automation tools that could eventually perform portions of software development, administrative work, and internal operations.
However, the rollout has sparked discomfort among employees, with concerns centering on surveillance, consent, and workplace trust. Critics argue that even if the data is restricted to work-related applications, continuous monitoring of keystrokes and screen activity represents a significant expansion of corporate oversight.
Meta has reportedly maintained that the data is not used for performance evaluation and is collected solely for AI training purposes, with safeguards intended to protect sensitive information. Still, the absence of an opt-out mechanism has become a major point of tension internally.
The development also reflects a growing industry trend, where companies increasingly use employee behaviour as a direct input into AI model training. Similar monitoring tools are already widespread in corporate environments for productivity tracking, but the scale and depth of data collection in Meta’s approach mark a new level of integration between workplace surveillance and AI development.
As AI competition intensifies across the tech sector, companies are racing to secure high-quality real-world data. For Meta, its own workforce has now become part of that data pipeline, blurring the line between employee activity and machine learning fuel.
The decision highlights a broader shift in how AI is being built: not just from public data or online behaviour, but increasingly from the private, everyday actions of workers inside the companies developing the technology itself.