For years, robotics labs have used a simple, effective method to teach their machines how to interact with the physical world. They hire human workers to perform thousands of repetitive tasks – picking up boxes, opening doors, stacking cups – while sensors record every movement. The resulting data becomes training fuel for robots that learn, slowly, to mimic human hands and eyes.
It is tedious, expensive, and ethically straightforward. The human subjects are consenting contractors, paid by the hour, fully aware that their motions are being captured.
Now, Meta has imported that playbook to the digital world. Except the human subjects are not contractors. They are employees. And they are not being asked to consent.
On April 21, Meta launched the Model Capability Initiative (MCI), a mandatory program that records screenshots, keystrokes, and mouse activity on U.S. employees' work laptops. The data is being used to train Meta’s AI models – specifically, to teach them how humans actually use software: selecting from dropdown menus, using keyboard shortcuts, navigating between applications, and filling out forms.
The program has no opt-out. According to an internal memo obtained by Business Insider, Meta CTO Andrew Bosworth responded to employee concerns with a blunt statement: “There is no option to opt out of this on your work-provided laptop.”
The backlash was immediate and visceral. On Meta’s internal workplace forum, the top-rated comment read: “This makes me super uncomfortable. How do we opt out?” The “angry-face” emoji became the most common reaction to the original announcement. Bosworth’s response was met with a mix of crying, shocked, and angry-face emojis.
But the discomfort is not just about privacy. It is about timing. The MCI program is rolling out to approximately 8,000 Meta staffers who are set to exit the company on May 20 – part of ongoing layoffs and restructuring. Their workflows are being logged for a full month before their last day. They are being asked to train the very AI that may, in some indirect way, help automate their former roles.
“It feels like being asked to dig your own grave,” one Meta employee, who spoke on condition of anonymity for fear of retaliation, told me. “They’re recording everything I do in my final weeks, knowing I’m about to be gone. And I can’t say no. It’s dystopian.”
Meta, for its part, frames the initiative as a collective responsibility. An internal announcement, signed by the MCI team, reads: “This is where all Meta employees can help our models get better simply by doing their daily work.”
But for the thousands of workers caught in the dragnet – especially those already packing up their desks – the message sounds less like an invitation and more like a confiscation.
Part I: The MCI – What It Captures, and Who It Captures From
The Model Capability Initiative is not a small pilot or an opt-in research study. It is a broad, mandatory rollout targeting all U.S.-based full-time employees and contingent workers. According to the internal memo, the software captures:
Keystrokes: Every key pressed, including shortcuts, text entry, and commands.
Mouse movements: Cursor trajectories, click locations, hover times, and scroll patterns.
Screen content: Screenshots of active applications, capturing what the employee sees.
Application context: Which apps are open, which windows are active, and the sequence of user actions.
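Taken together, those four streams amount to a structured activity log. As a rough illustration of what a single captured record might contain – the field names and schema here are hypothetical, since Meta has not published its actual format – consider this sketch:

```python
from dataclasses import dataclass, field
import time

@dataclass
class CaptureEvent:
    """One record in a hypothetical activity-capture log.

    Field names are illustrative only; Meta's actual MCI
    schema has not been made public.
    """
    timestamp: float   # Unix epoch seconds
    event_type: str    # "keystroke", "mouse", or "screenshot"
    app_name: str      # active application, e.g. "VSCode"
    window_title: str  # title of the active window
    payload: dict = field(default_factory=dict)  # event-specific details

# Example records, one per capture stream described above
events = [
    CaptureEvent(time.time(), "keystroke", "VSCode", "main.py",
                 {"key": "Ctrl+S"}),
    CaptureEvent(time.time(), "mouse", "Gmail", "Inbox",
                 {"x": 412, "y": 88, "action": "click"}),
    CaptureEvent(time.time(), "screenshot", "Gmail", "Inbox",
                 {"image_ref": "frame_0001.png"}),
]

# The fourth stream, application context, falls out of the sequence
# of app_name / window_title values across events.
apps_used = [e.app_name for e in events]
```

Even this toy version makes the privacy stakes concrete: the `payload` of a keystroke stream is, by definition, everything the employee types.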
Importantly, the scope is “limited to a pre-approved list of work-related applications and URLs.” That list, according to the memo, includes:
Gmail and Google Chat (work accounts)
Metamate (Meta’s internal AI assistant)
VSCode (Microsoft’s popular code editor)
Other unspecified “work-related” tools
The capture scope skews heavily toward developers – VSCode is on the list, and the emphasis on keyboard shortcuts and dropdown menus suggests a focus on software engineering workflows. But the inclusion of Gmail and Google Chat means that non-technical employees are also fully exposed. Every email drafted, every chat message sent, every calendar invitation accepted is potentially being recorded and fed into Meta’s AI training pipeline.
“The fact that Gmail and GChat are on the list is staggering,” said Elena Torres, a privacy lawyer who has worked on workplace monitoring cases. “Those are not just productivity tools. They contain confidential communications with external partners, legal advice from counsel, personal information about employees. Capturing that for AI training – even with ‘safeguards’ – is legally perilous.”
Meta has stated that the data is “not used for any other purpose” and that “privacy safeguards” are in place. But the company has not detailed what those safeguards are. Does the system automatically redact personal information? Are screenshots stored encrypted? Who has access? How long is the data retained? The internal FAQ, which employees were directed to for answers, has not been shared publicly.
Part II: The Layoff Context – Training AI on the Way Out
The most ethically fraught aspect of the MCI rollout is its timing relative to Meta’s ongoing workforce reduction. Approximately 8,000 staffers are set to exit on May 20 – a number that has been previously reported as part of Meta’s “efficiency year” restructuring. The MCI program began logging their workflows on or around April 21, giving the company a full month of data capture before their departure.
For these employees, the message is unmistakable: you are leaving, but your labor – even your keystrokes – still belongs to us until the very last second. And we will use that labor to build AI that may one day replace the work you used to do.
“There is a special kind of cruelty in monitoring people more intensely after they’ve been told they’re leaving,” said Dr. Sarah Klein, an organizational psychologist who studies workplace surveillance. “It creates a bizarre double bind. You want to do your job well enough to get a good reference or a severance package. But every click you make is now being harvested for a system that your employer values more than you. That is psychologically devastating.”
Several affected employees shared their reactions on anonymous forums like Blind and Reddit. One post, which received thousands of upvotes, read: “I’m on the May 20 list. I’ve spent 4 years at Meta. My last month, instead of winding down or doing knowledge transfer, I’m being recorded like a lab rat. I can’t even opt out. I feel like a prisoner.”
Another wrote: “They told us this was to ‘help our models get better simply by doing our daily work.’ But I’m not ‘our.’ I’m gone in a month. I’m helping a company that just fired me train my replacement. That’s the real ‘model capability’ – the ability to make me feel worthless.”
Meta declined to comment on the specific timing of the MCI rollout relative to the May 20 departures. A spokesperson reiterated that “all employees on work-provided laptops are subject to the same policies” and that “the program is designed to improve our AI for everyone’s benefit.”
Part III: The Internal Backlash – From Unease to Anger
The internal response to the MCI announcement has been unlike anything Meta has seen since the company’s early Facebook days. On the workplace communications site, the original post received hundreds of comments – a high volume for an internal announcement – with the overwhelming sentiment being negative.
The top comment, “This makes me super uncomfortable. How do we opt out?” was upvoted by more than 1,500 employees. The second-highest comment read: “This feels like a violation of basic trust. We know you monitor our devices. But this is different. This is active, continuous, and feels invasive.”
Bosworth’s response – “there is no option to opt out” – only intensified the backlash. One employee replied directly to Bosworth: “So we have the choice to quit? Is that the opt-out?” Another wrote: “You realize this is going to destroy morale, right? People are going to stop using work laptops for anything beyond the absolute minimum. You’ll get garbage data.”
A third comment, which was later deleted by its author but screenshotted and shared externally, read: “I’ve been here 8 years. I’ve never felt like a cog until now. This is the moment I realized I’m not an employee. I’m a training data source.”
The “angry-face” emoji was applied to the original post more than 2,000 times – a record for any internal Meta announcement in the past year. The “crying” and “shocked” emojis were also heavily used, indicating a mix of distress and disbelief.
What is notable is what did not happen: no high-level executive publicly defended the program beyond Bosworth’s terse statement. No internal FAQ or town hall was announced to address concerns. The message from leadership, employees told me, has been silence.
“They posted the memo, Bosworth said ‘no opt-out,’ and then everyone just… stopped talking,” one employee said. “It was like they dropped a bomb and walked away. We’re all just sitting here wondering if we’re going to be fired for unplugging the software.”
Part IV: The Technical Rationale – Why Meta Says It’s Necessary
To understand why Meta is taking such a risky step – both legally and culturally – one must understand the problem the company is trying to solve. AI models have become extraordinarily good at certain tasks: writing code, summarizing text, answering questions. But they remain remarkably bad at basic computer use.
A state-of-the-art model like GPT-5 can write a Python script to rename a hundred files. But ask the same model to actually open a file explorer, click on the first file, press F2, type the new name, and press Enter – a sequence a human does in seconds – and the model fails miserably. It does not understand mouse coordinates. It does not know that dropdown menus exist. It has never seen a dialog box.
This problem is known in the field as GUI grounding – the ability to map natural language instructions to graphical user interface actions. Solving it requires massive amounts of demonstration data: recordings of humans actually using software, with timestamps, click locations, and screen states.
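In practice, a GUI-grounding training example pairs a natural-language instruction with the recorded trajectory of low-level actions a human took, each tied to a screen state. A minimal, purely hypothetical sketch of what one such demonstration might look like – real datasets, Meta's included, are far richer and formatted differently:

```python
# A hypothetical GUI-grounding demonstration: one instruction paired
# with the timestamped actions a human performed, each anchored to a
# screenshot of the screen at that moment. Illustrative format only.
demonstration = {
    "instruction": "Rename the first file in the folder to report.txt",
    "trajectory": [
        {"t": 0.0, "screen": "explorer_0.png",
         "action": {"type": "click", "x": 120, "y": 240}},
        {"t": 1.2, "screen": "explorer_1.png",
         "action": {"type": "key", "key": "F2"}},
        {"t": 2.0, "screen": "explorer_2.png",
         "action": {"type": "type", "text": "report.txt"}},
        {"t": 3.5, "screen": "explorer_3.png",
         "action": {"type": "key", "key": "Enter"}},
    ],
}

# Training reduces to supervised learning: predict each next action
# from the instruction plus the current screen state.
inputs = [(demonstration["instruction"], step["screen"])
          for step in demonstration["trajectory"]]
targets = [step["action"] for step in demonstration["trajectory"]]
```

The expense Petrov describes below comes from the left-hand side of that pairing: collecting millions of screen states with aligned click coordinates requires recording real humans at real computers.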
“Text is cheap. Mouse movements are expensive,” said Dr. Alexander Petrov, a researcher in human-computer interaction who has consulted for several AI companies. “To train a model to use a computer like a human, you need millions of examples of humans using computers. The easiest source of that data? Your own employees. They’re already there. They’re already using the apps. You just need to record them.”
Meta is not alone in this approach. Microsoft has reportedly considered similar programs for internal AI training. Google has run opt-in studies. But no major tech company has attempted a mandatory, no-opt-out, screen-recording keystroke logger across thousands of employees – especially in the context of layoffs.
“The difference is consent,” Petrov added. “Other companies ask. Meta is telling. And that changes everything.”
The internal memo frames the MCI as a natural extension of Meta’s “all-in on AI” strategy. The company has formed a Meta Superintelligence Labs unit, launched AI Weeks, reorganized staff into “AI pods,” and released several models under the “Muse” brand. The MCI is presented as another lever: “All Meta employees can help our models get better simply by doing their daily work.”
But for employees, that framing feels like a rhetorical sleight of hand. They are not “helping.” They are being surveilled. And they have no choice.
Part V: The Legal Landscape – Is This Even Allowed?
The legality of the MCI program is a patchwork of jurisdictional gray areas. In the United States, there is no federal law that explicitly prohibits an employer from monitoring employee activity on company-provided equipment – as long as the employer has disclosed the monitoring. Meta’s employee handbook, like those of most large tech companies, includes language stating that “work devices are company property and may be monitored at any time.”
However, the MCI program pushes into novel territory. Keystroke logging, continuous screenshots, and AI training are not the same as IT security monitoring. And several state laws may impose additional requirements.
California, where Meta is headquartered, has some of the strongest privacy laws in the country. The California Consumer Privacy Act (CCPA) gives employees – as “consumers” in the employment context – certain rights over their personal information. But the CCPA has broad exceptions for data collected “solely for employment purposes.” Meta will likely argue that AI training is an employment purpose.
New York and Illinois have laws requiring employee consent for certain types of biometric or electronic monitoring. Illinois’ Biometric Information Privacy Act (BIPA) is particularly strict, requiring informed written consent before collecting “biometric identifiers” – which could arguably include keystroke dynamics or mouse movement patterns. Meta has not publicly addressed whether it has obtained such consent.
Washington state, where many Meta employees work remotely, has a law requiring employers to provide “reasonable notice” of electronic monitoring. The MCI pop-up notification likely satisfies that requirement, but the lack of an opt-out may still be challenged.
“The legal argument will come down to whether AI training is a ‘legitimate business purpose,’” said Torres, the privacy lawyer. “Courts have generally given employers wide latitude to monitor productivity and security. But using employee keystrokes to train a commercial AI product is not obviously the same as ensuring people aren't watching porn at work. This is a different category, and the law hasn’t caught up.”
No lawsuits have been filed yet. But several employment attorneys have told Business Insider they are actively recruiting plaintiffs. A class-action challenge is likely within weeks.
Part VI: The Broader Trend – Labor as Training Data
The MCI program is not an isolated incident. It is part of a larger, uncomfortable trend: the use of human labor – often without full consent – as training data for AI systems.
Amazon has been criticized for using warehouse workers’ movements to train robotic pickers, with employees reporting pressure to move faster to generate “better” data.
OpenAI has faced questions about whether it used customer chat logs to train models, leading to policy changes and additional disclosures.
Google reportedly considered – and then backed away from – a program to record employee meetings for AI training, after legal and ethical reviews.
Microsoft has an opt-in program for telemetry data, but employees can decline.
What distinguishes Meta’s MCI is the combination of mandatory participation, continuous recording, sensitive application scope (email and chat), and layoff timing. It is, in the words of one employee, “a perfect storm of bad decisions.”
“The robotics analogy is instructive,” said Klein, the organizational psychologist. “In robotics labs, the human demonstrators are contractors. They sign consent forms. They’re paid separately. They know exactly what’s being recorded and why. They are not also expected to do their regular jobs, meet performance metrics, and worry about layoffs. Meta has collapsed those boundaries entirely.”
The result, Klein argues, is a workplace environment where employees feel like subjects, not participants. “Trust is the currency of high-functioning teams. Meta is spending that currency on AI training. The question is whether they’ll have any left when this is over.”
Part VII: Employee Responses – Shadow Compliance and Quiet Resistance
Faced with a mandatory program and no opt-out, Meta employees are finding creative ways to resist without losing their jobs.
Some have begun using their work laptops for the absolute minimum required – checking email, attending meetings, avoiding any extended use of VSCode or Metamate. “If they want to train AI on my workflows, they’re going to get a very boring dataset,” one engineer told me. “I’m doing everything on my personal machine and just copying the final results over. My work laptop is basically just a Zoom machine now.”
Others have attempted to “poison” the data – intentionally performing useless or nonsensical actions to degrade the training signal. “Sometimes I just open VSCode and type random letters for a few minutes,” another employee admitted. “Let the AI learn that. See what happens.”
A third group has organized informal “data strikes” – coordinating with coworkers to all stop using Metamate, the internal AI assistant, simultaneously. “If they’re going to record us, we’re going to make sure the data is useless,” one organizer said. “The only power we have is the quality of our work. So we’re going to lower the quality until they give us an opt-out.”
Whether these tactics will have any impact is unclear. Meta has sophisticated data filtering and could likely detect and exclude anomalous patterns. But the very fact that employees are resorting to sabotage – however symbolic – speaks to the depth of the backlash.
Notably, no employees have publicly resigned over the MCI program. The tech job market, while still strong, is not as overheated as it was in 2021-2022. Leaving Meta without a next role lined up is a risk many are unwilling to take. So they stay. And they comply. And they resent it.
“That’s the trap,” said one long-time Meta employee. “They know we won’t quit over this. So they don’t have to care. We’re not people. We’re data sources with legs.”
Conclusion: The Dystopian Threshold
Meta’s Model Capability Initiative is not illegal. It may not even be, by the narrowest legal definitions, improper. Employees signed agreements acknowledging monitoring. Work devices are company property. AI training is a business purpose.
But legality is not morality. And the MCI program, in its mandatory scope, its layoff timing, and its unapologetic dismissal of employee discomfort, crosses a threshold that many in the tech industry have long feared: the normalization of employees as mere training data.
Robotics labs spent years recording humans doing physical tasks – grabbing, walking, stacking – to teach their systems how to move. Meta has just brought that playbook to software and computer use. But the demonstration subjects are its own staff, and the backdrop of layoffs gives the whole exercise an unmistakably dystopian cast.
The question now is whether the backlash will force a retreat. Meta has not changed course yet. But employee anger is real, legal challenges are likely, and the reputational damage of being seen as the company that turned its workers into keystroke miners is non-trivial.
For the 8,000 employees leaving on May 20, the damage is already done. Their final month at Meta will be remembered not for farewell lunches or knowledge transfer sessions, but for the quiet hum of recording software, capturing every click, every keystroke, every moment of their remaining time.
They came to Meta to build the future. They leave having been used as fuel for it.
And that, more than any memo or emoji, is the real story.