All Stories By:

Emilia David


Emilia David is a journalist focused on technology and the economy. She has written about AI, financial technology, the capital markets, consumer technology, commodities, energy, trade policy, labor, and politics. Her work has appeared in Insider, Venture Capital Journal, WatersTechnology, American Metal Market, DNAinfo, and BusinessWorld.

“Why are we expected to do the coding Olympics for every company that wants to interview you?”

Wired writes about how tech job interviews have gotten even more demanding after the series of layoffs that rocked the industry these past few years:

Emails reviewed by Wired showed that in one interview for an engineering role at Netflix, a technical recruiter requested that a job candidate submit a three-page project evaluation within 48 hours—all before the first round of interviews.

A Netflix spokesperson said the process is different for each role and otherwise declined to comment.

A similar email at Snap outlined a six-part interview process for a potential engineering candidate, with each part lasting an hour. A company spokesperson says its interview process hasn’t changed as a result of labor market changes.

Microsoft says its automated AI red teaming tool finds malicious content “in a matter of hours.”

PyRIT, or Python Risk Identification Toolkit, can point human evaluators to “hot spot” categories in AI that might generate harmful prompt results.

Microsoft used PyRIT while red teaming (the practice of intentionally trying to get AI systems to break their safety protocols) its Copilot services: the tool generated thousands of malicious prompts and scored each response for potential harm, flagging the categories security teams should focus on.
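PyRIT's actual API isn't shown in the post, but the workflow it describes — score adversarial prompt responses by harm category, then surface the "hot spot" categories for human evaluators — can be sketched in plain Python. Every name and number below is a hypothetical illustration, not PyRIT's real interface.

```python
from collections import defaultdict

def find_hot_spots(results, threshold=0.5):
    """Given (category, harm_score) pairs from an automated red-teaming
    run, return the categories whose average harm score exceeds the
    threshold -- the 'hot spots' a human evaluator should review first."""
    by_category = defaultdict(list)
    for category, score in results:
        by_category[category].append(score)
    return sorted(
        cat for cat, scores in by_category.items()
        if sum(scores) / len(scores) > threshold
    )

# Made-up scores for a batch of adversarial prompts, one pair per prompt.
results = [
    ("self-harm", 0.9), ("self-harm", 0.8),
    ("malware", 0.2), ("malware", 0.1),
    ("harassment", 0.7), ("harassment", 0.4),
]
print(find_hot_spots(results))  # ['harassment', 'self-harm']
```

The point of the aggregation step is triage: instead of reading thousands of individual responses, evaluators start with the few categories where the model failed most often.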

My Android bot wears ski goggles (or Vision Pros), has overalls, is purple, and is uselessly cute.

Google’s Android mascot, “The Bot,” debuted during CES in December, and now the company lets people customize their own Bots for fun. 9to5Google points out dressing up The Bot harkens back to Androidify, the avatar maker released in 2011 but discontinued in 2020.

I made a purple Android Bot with ski goggles because I’m a sucker for character creators, but what exactly I’ll use it for, as an iPhone owner, I don’t know.

My custom Android Bot is stylin’
Stability AI changes it up for Stable Diffusion 3.

VentureBeat reports that the next generation of Stability AI’s flagship AI image generation model will use a diffusion transformer framework similar to OpenAI’s Sora. Its current models rely on diffusion architecture alone.

The company said Stable Diffusion 3 — currently in preview — should be better at spelling (look closely at text in AI-generated images and you’ll see it often doesn’t look right) and should boost image quality.

“Generative AI has hit the tipping point.”

As Nvidia reports its Q4 2023 earnings, CEO Jensen Huang says:

Accelerated computing and generative AI have hit the tipping point. Demand is surging worldwide across companies, industries and nations.

That demand, and Nvidia’s dominance in AI chips, powered the company’s record revenue of $60.9 billion for the full year 2023 — a 126 percent increase from the year before. Its Q4 revenue of $22.1 billion is an astounding 265 percent increase over the same period last year.

Nvidia lets Google’s Gemma AI model loose on its GPUs.

The open-source Gemma models are optimized for “the installed base of over 100 million Nvidia RTX GPUs” in PCs around the world, in addition to Nvidia’s ubiquitous AI chips like the H100.

The models will also be part of Nvidia’s Chat with RTX demo, which lets AI models run locally and access users’ files to generate answers to prompts.

Calls for regulating AI deepfakes are growing.

An open letter signed by AI researchers, including Algorithmic Justice League founder Joy Buolamwini and former presidential candidate Andrew Yang, said governments should fully criminalize deepfake child sexual abuse material, even when it depicts fictional children, create criminal penalties for people who make and share “harmful” deepfakes, and hold developers liable if their safety measures are easily bypassed.

US policymakers have discussed regulating deepfakes, though mostly in the context of the upcoming elections. It’s rare for open letters to influence regulation, but AI is a fraught enough issue that some lawmakers might take these suggestions into account.