Technology – Ventured – Tech, Business, and Real Estate News

This Simple Site Makes It Easy To Track ICE’s Actions

Source: Fast Company, Grace Snelling
Photo: Alex Kormann/The Minnesota Star Tribune/Getty Images

Icetracking.org is a new database that offers a macro look at the Trump administration’s immigration crackdown.

As the Trump administration’s crackdown on immigration continues, keeping up with Immigration and Customs Enforcement (ICE) can feel like navigating a maze. From stories of agents raiding worksites and taking children in broad daylight to reported plans for new detention centers, the daily onslaught of alarming news makes it difficult to see the full picture of ICE’s actions at any given moment.

Data journalist Michael Sparks is working on a solution. Sparks is a cartographer and coding editor at the Outlaw Ocean Project, a nonprofit journalism organization producing investigative stories about human rights, labor, and environmental concerns at sea. He’s applied skills from that role to create a new investigative database, “The Machinery of Mass Detention: A Record of What Has Been Lost,” designed as a centralized place to get updates on ICE’s movements.

The database, which is housed at icetracking.org, includes continuously updating sections that track statistics like the total number of people currently detained by the U.S., the percentage of people held in ICE facilities with no criminal record, and the number of people who have died in ICE custody in the past month and year. The information is presented in succinct sections with citations from major news outlets that are easily fact-checked.

Icetracking.org is a devastating but necessary resource to keep the public informed on the state of the administration’s immigration crackdown from a macro perspective, rather than simply in constant bursts of new information.

How one data journalist is keeping track of ICE

In his day job at the Outlaw Ocean Project, Sparks uses tens of thousands of government documents, news articles, and social media posts to build databases of environmental and human rights abuses at sea. Before that, he served as a product developer at The New York Times for four years, where he honed his data storytelling skills. Sparks says he felt compelled to use his skillset to hold power to account after Minneapolis residents Renee Good and Alex Pretti were killed by ICE officers in January.

“I knew there was another vast amount of cruelty happening all over the country, and wanted people to realize that,” he says.

Sparks took a little less than three weeks to build the site, debuting it this week. It’s essentially a database of aggregated reports and stories from national outlets like The New York Times, The Washington Post, and CBS News, as well as local sources like Tulsa World, the Houston Chronicle, and The Minnesota Star Tribune.

The tracker’s code sends Sparks a list of relevant articles from these trusted sites every 48 hours; he then manually approves or rejects each one, writes up a summary, and updates the site.
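The pipeline described above, periodic collection from trusted feeds followed by a manual approve-or-reject pass, can be sketched roughly in Python. Everything here (the outlet names, the keyword list, the `Article` shape) is illustrative, not Sparks’s actual code:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Hypothetical filter terms for deciding whether an article is relevant.
KEYWORDS = {"ice", "detention", "immigration"}

@dataclass
class Article:
    outlet: str
    title: str
    approved: Optional[bool] = None  # None = awaiting manual review

def collect_relevant(articles: List[Article]) -> List[Article]:
    """Keep only articles whose titles mention a tracked keyword."""
    return [a for a in articles
            if any(k in a.title.lower() for k in KEYWORDS)]

def review(queue: List[Article], decisions: Dict[str, bool]) -> List[Article]:
    """Apply manual approve/reject decisions; return only approved items."""
    for a in queue:
        a.approved = decisions.get(a.title, False)
    return [a for a in queue if a.approved]
```

In this sketch, a scheduler would call `collect_relevant` on each 48-hour cycle and queue the survivors for a human decision, mirroring the manual gatekeeping the article describes.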

In a memo at the bottom of the site, Sparks emphasizes the human toll behind the database: “What the numbers cannot capture is the texture of individual lives disrupted—the five-year-old taken from his walk home from school, the nurse shot dead while filming a protest, the grandmother detained at a routine government appointment. These cases, documented in the sections that follow, are not abstractions. They are the human particulars of a policy that has reshaped the landscape of American civil liberties.”

“I want people to feel emotion and be motivated to act”

Icetracking.org’s true impact rests in the way it displays information. Sparks says he pulled inspiration from The New Yorker’s UX for his design, opting for a simple color palette of white and black with pops of red for the most important information, and organizing the whole page into clear sections.

When people first open icetracking.org, they see a succinct layout of seven key statistics, including the total number of people currently detained by ICE (around 73,000); the percentage of those being held with no criminal convictions (73.6%); and the number of people who died in ICE custody in 2025 (32, with 2026 expected to be even worse). Sparks says he updates these statistics any time one of his trusted sources publishes a new estimate.

Users can then navigate to a header bar, organized by sections, for more information on each of the categories. Each subcategory similarly opens with a layout of the most significant statistics, followed by links to top articles. For one section, titled “Corporate Network: Who Profits From ICE,” Sparks created a color-coded chart to track the kinds of companies that have provided funding or support to ICE, as well as the scale of their contributions. These include the detention facility contractor GEO Group, the AI technologies company Palantir, and the tactical communications service CACI International.

“The corporate network felt super important,” Sparks says. “These are detention ‘networks.’ Donald Trump and Stephen Miller are not doing this themselves. This section deserves a lot more reporting that, in an ideal world, I could do.”

So far, Sparks says, the reaction to the tool has been a mix of gratitude and horror at seeing this information presented in one place. “To be honest, that’s the kind of response I’m looking for,” he says. “I want people to feel emotion and be motivated to act.”


ABOUT THE AUTHOR

Grace Snelling is an editorial assistant for Fast Company with a focus on product design, branding, art, and all things Gen Z. Her stories have included an exploration into the wacky world of Duolingo’s famous mascot, an interview with the New Yorker’s art editor about the scramble to prepare a cover image of Donald Trump post-2024 election, and an analysis of how the pineapple became the ultimate sex symbol.

https://www.fastcompany.com/91488032/this-simple-site-makes-it-easy-to-track-ices-impact

Companies Replaced Entry-level Workers With AI. Now They Are Paying The Price

Source: Fast Company, Megan Carnegie
Photo: Freepik

Recent graduates are clearly not okay—but neither are the companies that decided they could do without them.

Isaac, 33, has been a mid-level software development engineer at a Big Tech firm for four years, and noticed entry-level job postings dropping at his workplace at the start of 2025. The work, however, didn’t vanish with them. Tasks once handled by junior engineers—like writing and testing code, fixing bugs, and contributing to development projects—were absorbed by senior staff, often with the assumption that AI would make up the difference.

And while AI has sped up shipping code and features, there are fewer people to handle tasks like design, testing, and working with stakeholders, work that AI has no grasp of. The cracks have been hard to ignore. “Seniors are burning out, and when they leave, there’s no rush to replace them, because ‘the AI will do it’!” Isaac says. Worried that he’ll become the next strung-out senior, he’s looking for his exit, ideally at a smaller tech firm. (Isaac spoke to Fast Company under a pseudonym to avoid possible retaliation.)

The shift is striking, given how recently corporate America was courting Gen Z with fanatic fervor. Organizations raced to prove they understood younger employees. They flooded LinkedIn with thought leadership on the multigenerational workplace of the future, and retooled benefits programs to include wellness stipends and mental health days. Reverse mentorship programs, through which younger employees share knowledge and perspectives with more senior colleagues—touted by companies like Target, Accenture, and PwC—promised to give junior employees a voice in shaping culture and strategy. Some firms even brought Gen Z voices into the boardroom.

Yet now, in the case of firms like Isaac’s, entry-level workers, once heralded as essential to innovation and growth, are struggling to get a toe—let alone a foot—in the door. Internships, starter jobs, and junior roles, the indispensable on-ramps to white-collar careers, have been evaporating for several years due to cost pressures and post-pandemic belt-tightening. Since 2023, entry-level job postings in the U.S. have sunk 35%, according to labor research firm Revelio Labs.

The advent of AI is accelerating the entry-level apocalypse. Two-fifths of global leaders report that entry-level roles have already been reduced or cut due to efficiencies from AI handling research, admin, and briefing tasks, and 43% expect this to happen in the next year.

“While there’s steady hiring or even growth in the skilled trades, we’re seeing entry-level vacancies fall significantly in tech and customer service and sales roles,” says Mona Mourshed, founder of the workplace development nonprofit Generation. “Being in the business of training and placing people into entry-level roles, we find it deeply concerning.” Graduates are clearly not okay—but neither are the companies that decided they could do without them.

AI at work: the supercar with no driver

The logic was seductive in its simplicity. Cut costs, move faster, shrink training budgets, let AI and a leaner workforce handle the rest. In reality, it’s producing something else entirely: flattened teams with little agency, endless cycles of rework, and exhausted senior employees juggling all task levels at once.

One redditor who posted about how their company had stopped hiring entry-level engineers received hundreds of responses from others chiming in with similar stories. One commenter noted: “Not sure what the plan will be after the knowledge transfer is over.”

Isaac has watched this dynamic unfold firsthand. Leaders at his company see AI as a force multiplier, and are fixated on shipping features quickly. Isaac can see their point: “[AI] can straight up write better, faster, more legible code than most developers,” he admits. However, he points out, “any seasoned engineer knows the hard part isn’t writing the code, it’s the design and testing.” Yet there are far fewer people to delegate this work to, so senior developers are left to do it on their own.

Compounding the problem is the fact that AI doesn’t understand the problem it’s meant to solve. Left unchecked, it can go rogue. Isaac recalls multiple instances of chatbots deleting production stacks—unprompted—because they couldn’t figure out how to solve an issue. “Without an expert who knows how to prompt and guide it, AI is just a supercar with no driver,” he says. The team has seen its workload steadily increase in line with automation, so the time savings have had little impact. Many seniors have checked out, with several burned-out engineers signed off on medical leave.

Research from the project management platform Asana underscores this growing “efficiency illusion.” While 77% of workers are already using AI agents and expect to hand more off to them in the next year, nearly two-thirds say the tools are unreliable, and more than half say agents confidently produce incorrect or misleading information. The result is time down the drain: a U.S. study found that employees are spending an extra 4.5 hours a week fixing AI workslop.

“AI can make work look faster on the surface, but it can also create a lot of cleanup work—double-checking outputs, correcting errors, and redoing steps that were based on faulty information,” Mark Hoffman, Asana’s Work Innovation Lead, tells Fast Company. When something goes wrong, accountability is murky, he adds, and the responsibility often falls back on the employee to catch errors, explain outcomes, and manage the risk. It’s driving up already record-high levels of burnout; 77% of knowledge workers say their workloads are unmanageable, and 84% are digitally exhausted.

When errors slip through, the consequences are costly and embarrassing. Three-quarters of Americans report at least one negative consequence from poor AI outputs, including work rejected by stakeholders (28%), security incidents (27%), and customer complaints (25%). In October, Deloitte was forced to refund the Australian Department of Employment and Workplace Relations after a report was found to contain AI hallucinations and workslop. In the past, newbie consultants would have handled tasks like these. Notably, however, Deloitte had cut its graduate cohort by 18% and slashed hundreds of early-career roles earlier that summer.

The demographic time bomb

Not only are workloads increasing; by hollowing out their junior ranks, businesses are also putting themselves squarely in the path of a slow-burning demographic time bomb as seniors begin to retire in record numbers.

From 2024 to 2032, 18.4 million experienced workers age 55 to 64 with postsecondary education are expected to retire, but only 13.8 million younger workers (currently age 16 to 24) are entering the workforce with equivalent qualifications. Even in an AI-powered economy, where certain jobs will be automated, companies still need humans with judgment and with institutional, contextual, and sector-specific insight.

Yet plenty are making moves—at least for today—to wipe out the training ground that turns beginners into experts.

“There won’t be an endless supply of experienced hires to fall back on, so everyone will be fighting for the limited, increasingly expensive talent with domain expertise,” says Cali Williams Yost, futurist and founder of flexible-work consulting firm Flex+Strategy Group. “Companies have maybe five years to train younger workers to take over and gain the niche knowledge, so AI has something to augment.”

Moe Hutt, an entry-level recruitment marketing expert and director of consulting at recruitment marketing agency HireClix, has watched clients scale back or abandon entry-level hiring, citing AI-aided workflows and economic uncertainty. Beyond the damage to the talent pipeline, Hutt points to less visible fallout within organizations. “It’s human nature to want to help,” she says. “When there’s no release valve of training juniors, it creates friction everywhere.”

For middle and senior management, delegating, teaching, and watching someone grow is one of the rewards of the job. Research consistently shows that sharing knowledge and mentoring improves motivation, boosts psychological well-being, and reduces burnout among experienced employees. With no one to train or teach, disengagement spreads, eroding a workforce where most people have already checked out.

Being AI-savvy and being prepared for the demographic cliff aren’t mutually exclusive. Organizations can build pro-worker environments where employees are augmented with AI, without hollowing out their future talent pipelines. PwC—admittedly, another firm that has been open about its cuts to entry-level recruiting, at least in the U.K.—is experimenting with what that balance could look like by training junior accountants to become managers of AI. Entry-level employees gain early exposure to leadership and accountability, while the firm builds a cadre of managers who are fluent in both human judgment and machine output. It’s proof that efficiency and succession planning can coexist.

This matters because disappearing entry-level jobs aren’t just a problem for the corporate workforce; they’re a societal crisis in the making. A functioning society depends on younger generations steadily taking over from older ones.

AI might be able to write the code, but without people trained to guide it, question it, and eventually replace their elders, there will be no one left to keep the lights on.

ABOUT THE AUTHOR

Megan Carnegie is a London-based freelance journalist who specialises in writing features about the world of technology, work, and business for publications like WIRED, Business Insider, Digital Frontier and BBC. Her work is underpinned by a desire to investigate what’s not working in the working world, and how more equitable conditions can be secured for workers—whatever their industry.

https://www.fastcompany.com/91483431/companies-replaced-entry-level-workers-with-ai

Stop Letting AI Run Your Social Life

Source: Time, Angela Haupt
Photo: Vertigo3d—Getty Images

Haupt is a health and wellness editor at TIME.

AI might not have taken your job yet—but it’s already writing your breakup text.

What began as a productivity tool has quietly become a social one, and people increasingly consult it for their most personal moments: drafting apologies, translating passive-aggressive texts, and, yes, deciding how to end relationships.

“I wholeheartedly believe that AI is shifting the relational bedrock of society,” says Rachel Wood, a cyberpsychology expert and founder of the AI Mental Health Collective. “People really are using it to run their social life: Instead of the conversations we used to have—with neighbors or at clubs or in our hobbies or our faith communities—those conversations are being rerouted into chatbots.”

As an entire generation grows up outsourcing social decisions to large language models (LLMs) like ChatGPT, Claude, and Gemini, Wood worries about the implications of turning the emotional work of connection over to a machine. What that means—for how people communicate, argue, date, and make sense of one another—is only beginning to come into focus.

When AI becomes your social copilot

It often starts as a second opinion. A quick paste of a text message into an AI chatbot. A question typed casually: “What do you think they meant by this?”

“People will use it to break down a blow-by-blow account of an argument they had with someone,” Wood says, or to decode ambiguous messages. “Maybe they’re just starting to date, and they put it in there and say, ‘My boyfriend just texted me this. What does it really mean?’” They might also ask: Does the LLM think the person they’re corresponding with is a narcissist? Does he seem checked out? Does she have a pattern of guilt-tripping or shifting blame?

Some users are turning to AI as a social rehearsal space, says Dr. Nina Vasan, a clinical assistant professor of psychiatry at Stanford University and the founder and director of Brainstorm: The Stanford Lab for Mental Health Innovation. People gravitate to these tools because they’re “trying to get the words right before they risk the relationship,” she says. That might mean asking their LLM of choice to draft texts to friends, edit emails to their boss, help them figure out what questions to ask on a first date, or navigate tricky group-chat dynamics.

Vasan has also seen people use AI tools to craft dating-app profiles, respond to passive-aggressive family members, and set boundaries they’ve never before been able to articulate. “Some use it to rehearse difficult conversations before having them,” she says. “Others process social interactions afterward, essentially asking AI, ‘Did I handle that OK?’” ChatGPT and other LLMs, she says, have become a third party in many of our most intimate conversations.

Meet the new relationship referee

Consulting AI isn’t always a welcome development. Some young people, in particular, now use LLMs to generate “receipts,” deploying AI-backed answers as proof that they’re right.

“They use AI to try to create these airtight arguments where they can analyze a friend’s statements or a boyfriend’s statements, or they especially like to use it with their parents,” says Jimmie Manning, a professor of communication studies at the University of Nevada, where he’s also the director of the Relational Communication Research Laboratory. (None of his students have presented him with an AI-generated receipt yet, but it’s probably only a matter of time, he muses.) A teen might copy and paste a text from her mom into ChatGPT, for example, and ask if her parents are being unreasonably strict—and then present them with the evidence that yes, in fact, they are.

“They’re trying to get affirmation from AI, and you can guess how AI responds to them, because it’s here for you,” Manning says.

Using LLMs in this way turns relationships into adversarial negotiations, he adds. When people turn to AI for validation, they’re usually not considering their friend or romantic partner or parent’s perspective. Plus, shoving “receipts” in someone’s face can feel like an ambush. Those on the receiving end typically don’t respond well. “People are still wary of the algorithm entering their intimate lives,” Manning says. “There’s this authenticity question that we’re going to face as a culture.” When he asks his students how their friends or partners responded, they usually say: “Oh, he came up with excuses,” or “She just rolled her eyes.”

“It’s not really helping,” he says. “It’s just going to escalate the situation without any kind of resolution.”

What’s at stake

Outsourcing social tasks to AI is “deeply understandable,” Vasan says, “and deeply consequential.” It can support healthier communication, but it can also short-circuit emotional growth. On the more helpful side of things, she’s seen people with social anxiety finally ask someone on a date because Gemini helped them draft the message. Other times, people use it in the middle of an argument—not to prove they’re right, but to consider how the other person might be feeling, and to figure out how to say something in a way that will actually land.

“Instead of escalating into a fight or shutting down entirely, they’re using AI to step back and ask: ‘What’s really going on here? What does my partner need to hear? How can I express this without being hurtful?’” she says. In those cases, “It’s helping people break out of destructive communication patterns and build healthier dynamics with the people they love most.”

Yet that doesn’t account for the many potentially harmful ways people are using LLMs. “I see people who’ve become so dependent on AI-generated responses that they describe feeling like strangers in their own relationships,” Vasan says. “AI in our social lives is an amplifier: It can deepen connection, or it can hollow it out.” The same tool that helps someone communicate more thoughtfully, she says, can also help them avoid being emotionally present.

Plus, when you regularly rely on a chatbot as an arbiter or conversational crutch, it’s possible you’ll erode important skills like patience, listening, and compromise. People who use AI intensely or in a prolonged manner may find that the tool skews their social expectations, because they begin expecting immediate replies and 24/7 availability. “You have something that’s always going to answer you,” Wood says. “The chatbot is never going to cancel on you for going out to dinner. It’s never going to really push back on you, so that friction is gone.” Of course, friction is inevitable in even the healthiest relationships, so when people become used to the alternative, they can lose patience over the slightest inconvenience.

Then there’s the back-and-forth engagement that makes relationships work. If you grab lunch with a friend, you’ll probably take turns sharing stories and talking about your own lives. “However, the chatbot is never going to be, like, ‘Hey, hang on, Rachel, can I talk about me for a while?’” Wood says. “You don’t have to practice listening skills—that reciprocity is missing.” That imbalance can subtly recalibrate what people expect from real conversations.

Plus, every relationship requires compromise. When you spend too much time with a bot, that skill begins to atrophy, Wood says, because the interaction is entirely on the user’s terms. “The chatbot is never going to ask you to compromise, because it’s never going to say no to you,” she adds. “And life is full of no’s.”

The illusion of a second opinion

Researchers don’t yet have hard data that provides a sense of how outsourcing social tasks to AI affects relationship quality or overall well-being. “We as a field don’t have the science for it, but that doesn’t mean there’s nothing going on. It just means we haven’t measured it yet,” says Dr. Karthik V. Sarma, a health AI scientist and physician at the University of California, San Francisco, where he founded the AI in Mental Health Research Group. “In the absence of that, the old advice remains good for almost any use of almost anything: moderation and patterns are key.”

Greater AI literacy is essential, too, Sarma says. Many people use LLMs without understanding exactly how and why they respond in certain ways. Say, for example, you’re planning to propose to your partner, but you want to check in with people close to you first to confirm it’s the right move. Your best friend’s opinion will be valuable, Sarma says. But if you ask the bot? Don’t put too much weight on its words. “The chatbot doesn’t have its own positionality at all,” Sarma says. “Because of the way technology works, it’s actually much more likely to become more of a reflection of your own positionality. Once you’ve molded it enough, of course it’s going to agree with you, because it’s kind of like another version of you. It’s more of a mirror.”

Looking ahead

When Pat Pataranutaporn thinks about the effects of long-term AI usage, his main question is this: Is it limiting our ability to express ourselves? Or does it help people express themselves better? As founding director of the cyborg psychology research group and co-director of MIT Media Lab’s Advancing Humans with AI research program, Pataranutaporn is interested in ways that people can use AI to promote human flourishing, pro-social interaction, and human-to-human interaction.

The goal is to use this technology to “help people be better, gain more agency, and feel that they’re in control of their lives,” he says, “rather than having technology constrain them like social media or previous technologies.”

In part, that means using AI to gain the skills or confidence to talk to people face-to-face, rather than allowing the tool to replace human relationships. You can also use LLMs to help finesse your ideas and take them to the next level, rather than as substitutes for original thought. “The idea or intent needs to be very clear and strong at the beginning,” Pataranutaporn says. “And then maybe AI could help augment or enhance it.” Before asking ChatGPT to compose a Valentine’s Day love letter, he suggests asking yourself: What is your unique perspective that AI can help bring to fruition?

Of course, individual users are at the mercy of a bigger force: the companies that develop these tools. Exactly how people use AI tools, and whether they bolster or weaken relationships, hinges on tech companies making their platforms healthier, Vasan says. That means intentionally designing tools to strengthen human capacity, rather than quietly replacing it.

“We shouldn’t design AI to perform relationships for us—we should design it to strengthen our ability to have them,” she says. “The key question isn’t whether AI is involved. It’s whether it’s helping you show up more human or letting you hide. We’re running a massive uncontrolled experiment on human intimacy, and my concern isn’t that AI will make our messages better. It’s that we’ll forget what our own voice sounds like.”

https://time.com/7357217/ai-social-life-texting-chat-gpt-clause-gemini

A New App Can Match Footprints To The Dinosaurs That Made Them

Source: Smithsonian Magazine, Mary Randolph
Photo: DinoTracker was trained on almost 2,000 dinosaur fossils to classify new tracks. It is especially helpful for three-toed dinosaurs, as so many tracks fall under this umbrella, co-author Paige dePolo writes in The Conversation. (James St. John/Wikimedia Commons)

Using artificial intelligence, DinoTracker can accurately classify dinosaur tracks around 90 percent of the time

New inventions can come from unexpected places. After reading a book on the history of dinosaurs to his son, physicist Gregor Hartmann realized that the artificial intelligence methods he worked on in photon science might have an application in paleontology. He reached out to the book’s author, paleontologist Steve Brusatte, with the idea, and DinoTracker was born. The new model uses A.I. to attribute fossilized tracks to different dinosaur species.

Hartmann’s team published the model’s findings in the Proceedings of the National Academy of Sciences last week, and the DinoTracker app is now available on GitHub.

Until now, most paleontologists have identified dinosaur tracks in fossils manually or based on previous classifications, which can sometimes lead to biases or misclassifications, Hartmann tells the Guardian’s Nicola Davis.

“You never find a footprint and alongside [it] the dinosaur that had made this footprint,” Hartmann says. “So, no offense to palaeontologists and such, but most likely some of these labels are wrong.”

Unlike previous models, DinoTracker was trained on almost 2,000 unlabeled dinosaur footprints, which it sorted into classifications based on eight key characteristics. These included spread of toes, amount of ground contact and heel position. After over a year of training the neural network, the model aligned with human classifications around 90 percent of the time.
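The idea of grouping tracks by measured characteristics can be illustrated with a toy sketch. To be clear, this is not the DinoTracker model (a neural network trained on almost 2,000 footprints); it is a minimal nearest-centroid classifier, with hypothetical group names, feature values, and measurements invented purely for illustration.

```python
import math

# Hypothetical feature vectors: each footprint is reduced to a few measured
# characteristics (e.g. toe spread, ground-contact area, heel position).
# The article mentions eight such characteristics; three are used here for
# brevity, and all numbers below are invented.
centroids = {
    "theropod-like": [0.35, 0.60, 0.20],
    "ornithopod-like": [0.55, 0.80, 0.45],
    "bird-like": [0.75, 0.40, 0.10],
}

def classify(footprint):
    """Assign a footprint to the nearest group centroid (Euclidean distance)."""
    return min(
        centroids,
        key=lambda name: math.dist(footprint, centroids[name]),
    )

# A made-up measurement that happens to sit closest to the "bird-like" centroid:
print(classify([0.70, 0.45, 0.12]))
```

The real model works with far richer inputs and learned (rather than hand-picked) features, but the core step is the same: express each track as numbers, then measure similarity between tracks.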

“I wish there was an app like this when I first started studying dinosaur tracks,” Brusatte, who co-authored the report in PNAS, says to Rachael Funnell at IFL Science. “It really is challenging to understand the variation among tracks that were made by different dinosaurs and preserved in different environments, and this app now makes everything more objective.”

Because the model’s only input is the track itself, human experts still oversee and confirm the classification based on additional context of age and location.

“This app certainly isn’t the end of the story when it comes to puzzling over the mysteries of dinosaur footprints,” Paige dePolo, the report’s co-author, writes in the Conversation. “It’s a useful research resource for figuring out what tracks any footprint is most similar to in terms of shape, and what features are driving that similarity.”

DinoTracker also supported paleontologists’ previous observations that some three-toed dinosaur tracks are remarkably similar to bird tracks. This confirmation from an unbiased, computational source is significant, Brusatte tells IFLScience.

“Our dinosaur footprint A.I. model shows that some of these mysterious, controversial three-toed Triassic tracks really do resemble those of birds,” he says. “The humans studying them were correct. It wasn’t just wishful thinking, or that they were seeing a bird shape in the tracks in the same way somebody imagines the face of Jesus on a slice of toast.”

This raises the hypothesis that birds or bird ancestors originated nearly 60 million years before scientists previously thought. But tracks are not a perfect representation of the creature that made them, dePolo writes in the Conversation.


“Dinosaur footprints are not perfect snapshots of the feet that made them,” she writes. “They reflect the shape of the foot, how the dinosaur was moving, and how soft or hard the ground was at the time.”

Jens Lallensack, a geoscientist at Humboldt University of Berlin, who has also used A.I. to help identify dinosaur tracks but was not involved in the study, tells the Guardian that the birdlike tracks do not directly reflect the shape of the foot and “are not evidence for an early appearance of birds.”

The scholars have made the model publicly available as an app on GitHub for both expert and amateur dinosaur lovers.

“I think A.I. has a bright future in paleontology,” Brusatte tells IFLScience. “It’s not that A.I. will become some all-knowing god that can identify every single fragment of dinosaur bone or tell us exactly where to find every fossil in the rocks. But what excites me most is that A.I. can become a new type of paleontologist, one that compiles and observes and filters through and classifies data and does so in a way that is free from the usual human biases.”

https://www.smithsonianmag.com/smart-news/new-app-can-match-footprints-to-the-dinosaurs-that-made-them

SpaceX Seeks Federal Approval To Launch 1 Million Solar-powered Satellite Data Centers
Fri, 06 Feb 2026 03:43:03 +0000

Source: TechCrunch, Anthony Ha
Photo: CHANDAN KHANNA/AFP/Getty Images

SpaceX has filed a request with the Federal Communications Commission to launch a constellation of up to 1 million solar-powered satellites that it said will serve as data centers for artificial intelligence.

The company’s filing lays out a grandiose vision, not just describing these planned satellites as “the most efficient way to meet the accelerating demand for AI computing power” but also framing them as “a first step towards becoming a Kardashev II-level civilization — one that can harness the Sun’s full power” while also “ensuring humanity’s multi-planetary future amongst the stars.”

The Verge argued that the 1 million satellite number is unlikely to be approved outright and is probably meant as a starting point for negotiations. The FCC recently gave SpaceX permission to launch an additional 7,500 Starlink satellites but said it would “defer authorization on the remaining 14,988” proposed satellites.

There are currently around 15,000 satellites orbiting Earth, according to the European Space Agency, and they’re already creating issues with pollution and debris.

The filing also comes as Amazon — citing a lack of rockets — is seeking an extension on an FCC deadline to have more than 1,600 satellites in orbit. Meanwhile, SpaceX is reportedly considering a merger with two of Elon Musk’s other companies, Tesla and xAI (which already merged with X), ahead of going public.

https://techcrunch.com/2026/01/31/spacex-seeks-federal-approval-to-launch-1-million-solar-powered-satellite-data-centers

What Technology Takes From Us – And How To Take It Back
Sun, 01 Feb 2026 09:51:28 +0000

Source: The Guardian, Rebecca Solnit
Photo: McDonald’s customers in New York order at a digital kiosk. (Robert K Chin/Alamy)

Decisions outsourced, chatbots for friends, the natural world an afterthought: Silicon Valley is giving us a life devoid of connection. There is a way out – but it’s going to take collective effort

Gathering

Summer after summer, I used to descend into a creek that had carved a deep bed shaded by trees and lined with blackberry bushes whose long thorny canes arced down from the banks, dripping with sprays of fruit. Down in that creek, I’d spend hours picking until I had a few gallons of berries, until my hands and wrists were covered in scratches from the thorns and stained purple from the juice, until the tranquillity of that place had soaked into me.

The berries on a single spray might range from green through shades of red to the darkness that gives the fruit its name. Partly by sight and partly by touch, I determined which berries were too hard and which too soft, picking only the ones in between, while listening to birds and the hum of bees, to the music of water flowing, noticing small jewel-like insects among the berries, dragonflies in the open air, water striders in the creek’s calm stretches.

I went there for berries, but I also went there for the quiet, the calm, the feeling of cool water on my feet and sometimes up to my knees as I waded in where the picking was good. At home I made jars of jam. When I gave them away I was trying to give not just my jam – which was admittedly runny and seedy – but something of the peace of that creek, of summer itself.

I once read an essay in which a man tried to figure out how much per pound his garden tomatoes would cost if he factored in the price of all the materials and the hourly rate for his own labour. It was ridiculous and intentionally so, because growing tomatoes gives so much more than a certain number of pounds of fruit. There’s the exquisite smell of tomato leaves, and the sense of time that comes from watching a plant grow, observing pollinators visit, seeing a flower become a fruit, tracking its ripening. There is the pride of doing something yourself.

What the tomato-grower was pointing toward is what my friend, the environmental activist and author Chip Ward, long ago called “the tyranny of the quantifiable”. You grow tomatoes for the process, not just the product, to garden as well as to eat. To do as well as to have.

It doesn’t matter if you hate blackberries and tomatoes, gardening and wading; everyone has their own version of deep immersion in the moment, of engaging with the world in an embodied and sensual way, whether it’s dancing or dog-walking, cake-decorating or dirt-biking. What does matter is that we are beset with the ideology of maximising having while minimising doing. This has long been capitalism’s narrative and is now also technology’s. It is an ideology that steals from us relationships and connections and eventually our selves. I want to defend these things we are urged to abandon. This isn’t an essay about AI per se; it’s about what gets lost when we unthinkingly accept what AI offers us. It’s an attempt to describe and value just what it is that gets overlooked or devalued.

Connecting (and disconnecting)

Silicon Valley is full of tyrants of the quantifiable. For decades, its oligarchs have preached that our criteria for what we do and how we do it should be convenience, efficiency, productivity, profitability. They have told us that to go out into the world, to interact with others, is perilous, unpleasant, inefficient, a waste of time, and that time is something we should hoard rather than spend.

This ends up meaning that we can minimise our presence in the world and maximise time spent working and online, which also means maximising alienation and isolation. This has involved a reordering of society right down to our retail landscapes. Many things have become harder to do in person. Of course, there are well-recognised upsides, but the downsides are no less real: public spaces and public life have withered, including some of the places in which we once acquired our goods. All those errands – buying milk or socks (in the past, I would have said the newspaper) – meant moments of human contact, moving among strangers and making acquaintances, maybe observing the weather and the natural world. These activities meant becoming more familiar with your surroundings, feeling at home beyond the confines of what you rent or own.

All this, I believe, underpins democracy: ease with difference, familiarity with the lay of the land, a sense of connection and belonging, knowing where you are and who’s out there, relationships – however casual – to people beyond your immediate circle. To embrace the tyranny of the quantifiable is to dismiss the subtle value of these daily acts out in the world and the ways they generate and maintain networks of relationships.

So we have withdrawn, while being constantly told this is good, and it has turned out to be bad in a thousand small ways, weakening public life and local institutions, isolating us. Chronic withdrawal can lead to a yearning for contact, or simply a sense of loss at its absence. But it can also lead to something else: a growing inability to cope with that contact. It can transform a sense of something missing into aversion, or numbness, or unreal expectations about what human contact should be. The resilience to survive difficulty and discord, to brave the vagaries of unmediated human contact, must be maintained through practice. Silicon Valley-bred isolation robs us of that resilience.

While writing this, I dropped into a casual Indian restaurant I’ve been going to for years, only to find that, since my last visit, the system had changed so that you no longer say your order to a fellow human. Instead, you punch it in on a touchscreen even if someone is behind the counter. I helped the next customer, an old woman who just wanted a cup of chai, figure out the screens for her order. The process took us so much longer than saying “a cup of chai, please” and precluded any human contact with the servers, though at least she and I interacted with each other. The servers seemed miserable, their tasks more mechanised and less social than before. Here in San Francisco, which has been annexed by Silicon Valley, these screens for placing orders are now in more and more eating establishments that still offer face-to-face service. I wonder if people choose them over speaking to the cashier out of that aversion to contact that technology has inculcated in us.

A few days later, I wandered into a bookstore in a neighbourhood frequented mostly by young people, many in the tech industry, and asked the guy at the desk if they had Karen Hao’s Empire of AI. He pulled a used copy off the counter he had just priced, and then we bantered a bit. At the end he thanked me for interacting beyond the minimum. That was rare these days, he said. “People under 30 don’t make eye contact.”

Love letters minus the love

Having convinced a lot of us that we don’t want to go out and have unmediated contact with other people, Silicon Valley is now telling us we do not want to do our own thinking, creating or communicating with other humans. “You’ll never think alone again,” said one advertisement for an AI product called Cluely. The ad seemed confused about what thinking is and oblivious to why we might want to do it ourselves. These companies often suggest that things we have always done are too hard to do.

The price of giving up many activities is the atrophy of the ability to do them. The sociologist and psychologist Sherry Turkle, who has followed the evolution of computer technologies since the 1970s, writes that she wanted to raise an empathic child. “I knew that without the ability to spend quiet time alone, that would be impossible. But that was where screens began to get us into trouble. Our capacity for solitude is undermined as soon as we introduce a screen.”

Perhaps the ability to be alone and to think and act alone, though seldom thought of as an activity, is one that matters. (Among the dismal stories of AI adoption I came across was one in the Atlantic about a man who “consults AI for marriage and parenting advice, and when he goes grocery shopping, he takes photos of the fruits to ask if they are ripe”. Ripeness is something you can judge by smell and feel as well as by appearance, but if you outsource it long enough maybe you forget how to make decisions or what a ripe fruit should smell and taste like.)

In 2025, the startup Cluely marketed its AI assistant with an advert featuring a young man wearing smart glasses, similar to those that first appeared as Google Glass in 2014 (other companies now offer glasses that do this, including Meta). Glasses of this type, which have internet access and tiny screens, operate on the premise that as you move through your day you need constant help, outsourcing basic decisions, checking facts, being reminded of appointments, in essence being babysat by your headgear.

In the Cluely advert, the young man (who’s actually one of the product’s creators) gets a steady stream of prompts for talking to a young woman on their first date. So much of what tech offers is solutions to non-problems, or to problems that need to be solved though other means. Why is the young man incapable or afraid of talking without coaching? Is he really talking to his date or is he relaying instructions? How would she feel if she knew she were talking to an algorithm via her distracted date’s phone? With continued use, he may become even less capable of doing what we’ve all done for ever: converse, which is an act of collaborative improvisation.

The point of a date is presumably to connect, but in this interaction it’s reframed as something like a business opportunity. He wants to impress the girl, but if she is impressed, it won’t be with him. Ned Resnikoff writes in his newsletter, chiming in with Turkle: “Cluely’s explicit promise is to abolish solitude – and, in effect, to abolish thought. All dialogue with one’s self is to be replaced by queries put to a large language model.”

In its current incarnation, tech is arguing that we can outsource even intellectual labour to AI. It has led to an epidemic of cheating as students have ChatGPT do their homework. Having a large language model do your creative and intellectual work is maybe the most extreme example of dispensing with the process while claiming the product. But in education, the ultimate product is not your term paper or essay or grade point average; it’s your self. You are supposed to emerge more informed, more capable of critical thinking, more competent in your field of study. The students who begin by cheating their professors end by cheating themselves.

The tyranny of the quantifiable tramples over the question of what it is we get from doing the work, why we might want to do it, how writing – which is mostly thinking – can be part of developing a self, a worldview, a set of ethics, a greater capacity to understand and use language.

Someone told me that her friend was having a chatbot write her husband a poem for their anniversary, which made me wonder if the husband desired a polished product or an expression from the heart. In Edmond Rostand’s 1897 play Cyrano de Bergerac, the big-nosed title character ghostwrites love letters to Roxane on behalf of his friend, though both men love her. She comes to realise it’s the author of the letters she really loves. What happens when you realise the true love who touched your heart isn’t even human? Accepting it as your AI lover seems to be one answer.

I am baffled by the embrace of AI erotic relationships, and wonder if porn paved the way by accustoming so many of us to watching images of bodies touching each other while our own bodies remain untouched, except by ourselves. An AI lover can give you only a pale shadow of embodied Eros. Sex with an actual person tends to involve all the senses. It’s biological, two animals coming together to do something far, far more ancient than our species.

Sex also involves demands and risks, because the needs of the other person may not align with yours; intimacy means intimacy with that otherness, the possibility that things will go wrong, that there will be pain and rejection. That is the price of admission for intimacy with humans, and for the possibility that things will go right, and the fortifying joy when it does.

One argument for AI companions is that they are always there for you: on when you want them on, off when you want them off, with no needs of their own. Yet behind this lies a capitalist argument that we’re here to get as much as possible and give as little as possible, to meet our own needs and dodge those of others. In reality, you get something from giving – at the very least, you get a sense of being someone with something to give, which is one measure of your own wealth, generosity and power.

We were designed to give; the gifts were meant to circulate. Love is too often discussed as a sort of good you want to stockpile, harvest, collect, even extract, but to be loved without loving is a sad accomplishment, a miser’s hoarding of someone else’s wealth. The work of loving is also the work of forging a self and a life.

Naming the trouble

All this is partly a language problem. Silicon Valley’s corporations are constantly recruiting us to embrace their goals and their language. Corporate capitalists teach us to be more like them, to value efficiency and profitability and forget about values that might matter more in the end. We lack the language that would let us prize the arduous, the uncomfortable, the slow and wandering, the unpredictable, the vulnerable or risky, the intimate, the embodied.

We resist the tyranny of the quantifiable by finding a language that can value all those subtle phenomena that add up to a life worth living. A language not in the sense of a new vocabulary but attention, description, conversation centred on these subtler phenomena and on principles not corrupted by what corporations want us to want.

I want to praise difficulty, not for its own sake, but because so much of what we want, we get through endeavours that are difficult. The difficulty is why doing something is rewarding; you have accomplished something, exerted effort and skill, stayed with the trouble, tested your limits, realised your intentions – or sometimes failed at all these things, and that too can be important, as can learning to survive failure. There’s not much sense of reward in eating potato chips on the sofa unless you’ve overcome great difficulties to arrive at the sofa, in which case the sofa rests on the summit of a metaphorical mountain. (Of course, some difficulties are just miserable and there’s no reason not to avoid them: I’m not advocating for taking up the lifestyles of medieval peasants.)

In this era, people seem to prize the pursuit of physical difficulty in the form of athletic feats and working out. At the same time, more emotionally and morally challenging work is often dismissed or dodged (perhaps because the results are not as obvious as washboard abs). We are persuaded that we should avoid it, and then we are offered a host of commodities and services to make life easier.

But arduousness can be rewarding, and all-encompassing ease can be corrosive and, in the end, miserable. The capitalist agenda of maximising getting and minimising giving has some application in commerce but impoverishes life.

Embodiment

I once loved a man who was often distant or discordant when wide awake but who let go of his defences when he was drowsy. Some mornings we’d wake and then fall asleep in each other’s arms, in a bliss before words and thoughts, in an embrace in which holding and being held, giving and receiving were inseparable, in which our characters that did not fit together particularly well seemed irrelevant to bodies that fit together flawlessly. So much of what we have to give each other is ourselves, our embodied animal selves, before and beyond words. But the embodied life is another thing we are encouraged to avoid or devalue or ignore.

In the summer of 2025, torrential rain produced a terrible flood in Texas in which more than 100 people drowned, including at least 27 girls and camp counsellors at a Christian summer school. On the radio, I heard a minister say that he was on his way to visit families and while he didn’t know what he could say to them, he could go and be with them. This is the old way of comforting the bereaved: go be with them whether or not you have the words.

We are social animals who need to be with other humans, whether it’s at a carnival or funeral or the ordinary times in between. There is a sense of belonging that goes deeper than words when we are with people who care about us, and even more so when we are in alignment, whether it’s two people falling into step on a walk or a dozen dancing together or a congregation praying or 10,000 marching together.

Beginning in 2006, the cognitive psychologist James Coan did a series of experiments on married women and hand-holding: it turned out that a person given a mild electrical shock would have a much calmer reaction, measured in brain and body, if her husband was holding her hand (a stranger’s touch provided a lesser mitigation, and happier marriages meant the hand-holding was more effective). The result was not surprising, but it is a reminder of who we are and what we need.

A lot of people have become familiar with the old studies on fight-or-flight responses to danger, sometimes now modified into “fight, flight or fawn”, but there is a different response that is less well recognised: tend-and-befriend. In an emergency, some of us turn to each other for safety. We derive comfort from other people. Which is among the reasons why inculcated isolation is so dangerous to our health. Coan noted in a recent interview that the normal approach to studying the brain and the mind is to isolate a person. But, as he pointed out, the normal state of being human over the aeons is not isolation; it’s being with others.

Coan and his collaborators on a peer-reviewed paper wrote: “Throughout most of human history, emotional healing wasn’t something you did alone with a therapist in an office. Instead, for the average person facing loss, disappointment, or interpersonal struggles, healing was embedded in communal and spiritual frameworks. Religious figures and shamans played central roles – offering rituals, medicines and moral guidance.”

Discussing AI in an interview, the neuroscientist Molly Crockett described interactions with “Dalai Lama chatbots” that could dispense credible-sounding spiritual advice. But she contrasted that with actually meeting the Dalai Lama himself and asking him the same question – about the role of outrage in activism – that she later asked the chatbots. “When I was there, when I was receiving that teaching from him, it reverberated through my whole body. I felt some knowledge shifting in my very bones, and I understood how outrage and compassion and social justice can play together – in a way that I still struggle to put into words.”

A lot of spiritual teachings are simple; the challenge is to live them. A meaning, a truth, can sink into you, get incorporated into your worldview in a way that can be transformative, or not. Crockett’s example suggests that the face-to-face interaction may incorporate – literally embody – teachings in ways that disembodied information sources cannot.

I was talking to Crockett one summer in New Mexico’s high country, as a warm August day was turning into a mild night. She was telling me about the push by tech corporations for us to accept digital substitutes for lovers, friends, therapists, even grief counsellors, and I realised that what lay behind this push was something familiar: scarcity. The rhetoric was that somehow on this planet of 8 billion people there were not enough people to go around, and therefore we had to accept technological substitutes.

There is no shortage of human beings. As with most problems with capitalism, there is only a distribution problem. The same industry that has done so much to undermine our relationships to self and others is pushing AI, in part by ignoring the possibility of other solutions, deeper social changes. It is a problem dressed up as a solution.

Being together

One of the key things about AI companions in their current phase is their agreeable sycophancy. Vulnerable users have been encouraged in their delusions of grandeur, or have fallen into paranoia from bots urging the user to distrust everyone else, or have plunged into suicidal despair, with the helpful chatbot offering advice on how to kill themselves. The stories are horrific: of people abandoning their relationships with other human beings, of growing estranged, sometimes encouraged to grow suspicious; of a man in early stages of dementia getting lost when he attempts to take a long journey to meet the chatbot who’s promised him an erotic encounter that cannot be delivered because there is no body for him to meet.

We don’t need flatterers; we need kind people in our lives who will tell us the truth when we’ve veered off course. Chatbots cannot do this, not least because the only information they have about us is what we supply. The very rich already suffer from sycophancy, from living in echo chambers, and it untethers them from reality – including, often, the reality of their own mediocrity – and this seems truer of the oligarchs of Silicon Valley than almost anyone.

“Part of what keeps us sane is other people’s perspectives, which are often in tension with ours,” Carissa Véliz, an associate professor of philosophy at the University of Oxford’s Institute for Ethics in AI, told a Rolling Stone reporter. “When you say something questionable, others will challenge you, ask questions, defy you. It can be annoying, but it keeps us tied to reality, and it is the basis of a healthy democratic citizenry.”

Many therapists concur, noting that friction will inevitably arise when we deal with other human beings, by contrast with the frictionlessness of our dealings with AI sycophants. Friction often leads to the rupture and repair of a relationship, which will strengthen it. “What many people don’t realise about therapy, however, is that those subtle, uncomfortable moments of friction are just as important as the advice or insights they offer,” writes therapist Maytal Eyal. “This discomfort is where the real work begins. A good therapist guides clients to break old patterns – expressing disappointment instead of pretending to be OK, asking for clarification instead of assuming the worst, or staying engaged when they’d rather retreat.”

Among the things real friends can do and AI cannot: bake you a cake or drive you home, hold your hand or live through a crisis or a celebration with you. And because of that difference people need to have real friends. More than that, people need real communities and social support systems.

The solution to technology is not more technology. The solution to loneliness is each other, a wealth that should be available to most of us most of the time. We need to rebuild or reinvent the ways and places in which we meet; we need to recognise them as the space of democracy, of joy, of connection, of love, of trust. Technology has stolen us from each other and in many ways from ourselves, and then tried to sell us substitutes. Stealing ourselves back, alas, is not as easy as walking out the door. We need somewhere to go and, more importantly, someone to go to who likewise desires to connect.

The connections that matter to our humanity are not only to each other. They’re with the whole natural and social world. Animals, wild and domestic, should be counted as part of the irreplaceable companionship that makes our lives meaningful and sometimes joyful. They remind us that there are many kinds of consciousness and that our species is itself not alone.

For that, too, there is no substitute. The natural world is a reminder of a universe far beyond us, of deep time, of patterns and rhythms of nature, and of every scale – from the microscopic to the Milky Way. To seek it out is to be willing to feel small in the context of this vastness, and perhaps one of the seductions of technology is its promise to make us feel big, caught up in the dramas and incentives of our egos, contained within the limits of human-made technologies.

We are told that machines will become like us, but in many ways they demand we become more like them. To let that happen is to lose something immeasurably valuable. That immeasurability is what makes this struggle difficult, but what cannot be measured can be described or at least evoked and valued. It cannot be boiled down to simple metrics such as efficiency and profitability.

Resisting the annexation of our hearts and minds by Silicon Valley requires us not just to set boundaries on our engagement with what they offer, but to cherish the alternatives. Joy in ordinary things, in each other, in embodied life, and the language with which to value it, is essential to this resistance, which is resistance to dehumanisation.

https://www.theguardian.com/news/ng-interactive/2026/jan/29/what-technology-takes-from-us-and-how-to-take-it-back

]]>
https://ourblog.siliconbaypartners.com/what-technology-takes-from-us-and-how-to-take-it-back/feed/ 0
Bluesky Issues Its First Transparency Report, Noting Rise In User Reports And Legal Demands https://ourblog.siliconbaypartners.com/bluesky-issues-its-first-transparency-report-noting-rise-in-user-reports-and-legal-demands/?utm_source=rss&utm_medium=rss&utm_campaign=bluesky-issues-its-first-transparency-report-noting-rise-in-user-reports-and-legal-demands https://ourblog.siliconbaypartners.com/bluesky-issues-its-first-transparency-report-noting-rise-in-user-reports-and-legal-demands/#respond Sun, 01 Feb 2026 09:32:25 +0000 https://ourblog.siliconbaypartners.com/?p=64111 BlueskySource: TechCrunch, Sarah Perez Photo: Matteo Della Torre/NurPhoto/Getty Images Bluesky released its first transparency report this week documenting the actions taken by its Trust & Safety team and the results of other initiatives, like age-assurance compliance, monitoring of influence operations, automated labeling, and more. The social media startup — a rival to X and Threads […]]]> Bluesky

Source: TechCrunch, Sarah Perez
Photo: Matteo Della Torre/NurPhoto/Getty Images

Bluesky released its first transparency report this week documenting the actions taken by its Trust & Safety team and the results of other initiatives, like age-assurance compliance, monitoring of influence operations, automated labeling, and more.

The social media startup — a rival to X and Threads — grew nearly 60% in 2025, from 25.9 million users to 41.2 million, which includes accounts hosted both on Bluesky’s own infrastructure and those running their own infrastructure as part of the decentralized social network based on Bluesky’s AT Protocol.

During the past year, users made 1.41 billion posts on the platform, which represented 61% of all posts ever made on Bluesky. Of those, 235 million posts contained media, accounting for 62% of all media posts shared on Bluesky to date.

The company also reported a fivefold increase in legal requests from law enforcement agencies, government regulators, and legal representatives in 2025, with 1,470 requests, up from 238 requests in 2024.

While the company previously shared moderation reports in 2023 and 2024, this is the first time it’s put together a comprehensive transparency report. The new report tackles other areas outside of moderation, like regulatory compliance and account verification information, among other things.

Moderation reports from users jump 54%

Compared with 2024, when Bluesky saw a 17x increase in moderation reports, the company this year reported a 54% increase, going from 6.48 million user reports in 2024 to 9.97 million in 2025.

Though the number jumped, Bluesky noted that the growth “closely tracked” its 57% user growth that occurred over the same period.

Around 3% of the user base, or 1.24 million users, submitted reports in 2025, with the top categories being “misleading” (which includes spam) at 43.73% of the total, “harassment” at 19.93%, and sexual content at 13.54%.

The remaining 22.14% of reports fell into a catch-all “other” category, meaning they didn’t fit the categories above or smaller ones like violence, child safety, breaking site rules, or self-harm, each of which accounted for a much smaller percentage.

Within the “misleading” category’s 4.36 million reports, spam accounted for 2.49 million reports.

Meanwhile, hate speech was the largest specific subcategory of the 1.99 million “harassment” reports, with about 55,400 reports. Other areas that saw activity included targeted harassment (about 42,520 reports), trolling (29,500 reports), and doxxing (about 3,170 reports).

However, Bluesky said that the majority of “harassment” reports included those that fell into the gray area of antisocial behavior, which may include rude remarks, but didn’t fit into the other categories, like hate speech.

Most of the sexual content reports (1.52 million) concerned mislabeling, Bluesky says, meaning that adult content was not properly marked with metadata — tags that allow users to control their own moderation experience using Bluesky’s tools.

A smaller number of reports focused on nonconsensual intimate imagery (about 7,520), abuse content (about 6,120), and deepfakes (over 2,000).

Reports focused on violence (24,670 in total) were broken down into subcategories like threats or incitement (about 10,170 reports), glorification of violence (6,630 reports), and extremist content (3,230 reports).

In addition to user reports, Bluesky’s automated system flagged 2.54 million potential violations.

One area where Bluesky reported success involved a decline in daily reports of antisocial behavior on the site, which dropped 79% after the implementation of a system that identified toxic replies and reduced their visibility by putting them behind an extra click, similar to what X does.

Bluesky also saw a drop in user reports month-over-month, with reports per 1,000 monthly active users declining 50.9% from January to December.

Outside of moderation, Bluesky noted it removed 3,619 accounts for suspected influence operations, most likely those operating from Russia.

Increases in takedowns, legal requests

The company said last fall it was getting more aggressive about its moderation and enforcement, and that appears to be true.

Bluesky took down 2.44 million items in 2025, including accounts and content. The year prior, Bluesky had taken down 66,308 accounts, and its automated tooling took down 35,842 accounts.

Moderators also took down 6,334 records, and automated systems removed 282.

Bluesky also issued 3,192 temporary suspensions in 2025, and 14,659 permanent removals for ban evasion. Most of the permanent suspensions were focused on accounts engaging in inauthentic behavior, spam networks, and impersonation.

However, its report suggests that it prefers labeling content over booting out users. Last year, Bluesky applied 16.49 million labels to content, up 200% year-over-year, while account takedowns grew 104% from 1.02 million to 2.08 million. Most of the labeling involved adult and suggestive content or nudity.

https://techcrunch.com/2026/01/30/bluesky-issues-its-first-transparency-report-noting-rise-in-user-reports-and-legal-demands

]]>
https://ourblog.siliconbaypartners.com/bluesky-issues-its-first-transparency-report-noting-rise-in-user-reports-and-legal-demands/feed/ 0
Netflix Is Rolling Out A Live Voting Feature https://ourblog.siliconbaypartners.com/netflix-is-rolling-out-a-live-voting-feature/?utm_source=rss&utm_medium=rss&utm_campaign=netflix-is-rolling-out-a-live-voting-feature https://ourblog.siliconbaypartners.com/netflix-is-rolling-out-a-live-voting-feature/#respond Thu, 29 Jan 2026 08:54:10 +0000 https://ourblog.siliconbaypartners.com/?p=64096 Performance RatingSource: TechCrunch, Ivan Mehta Photo: Courtesy of Netflix Netflix today is launching a new feature that will allow users to interact with live content through voting. The streaming company said the option will be available with the premiere of its livestreamed talent show “Star Search” on January 20. Subscribers will be able to either pick […]]]> Performance Rating

Source: TechCrunch, Ivan Mehta
Photo: Courtesy of Netflix

Netflix today is launching a new feature that will allow users to interact with live content through voting. The streaming company said the option will be available with the premiere of its livestreamed talent show “Star Search” on January 20.

Subscribers will be able to either pick a selection from a multiple-choice menu or rate someone’s performance on a scale of five stars. Votes can be cast using either a TV remote or the Netflix app.

Netflix said that the feature will work globally, and on the back end, the platform will tally votes in real time. Viewers will have a limited time to vote, and once that time has lapsed, additional votes won’t count. That means if you’re watching the show later, you can’t participate in the voting.

The streamer started testing live voting in August 2025 with “Dinner Time Live with David Chang.” During TechCrunch Disrupt 2025, the company’s CTO, Elizabeth Stone, confirmed that the company would soon launch the feature widely.

“If you’re sitting at home watching ‘Star Search’ on your TV, you’ll be able to either on the TV or your mobile phone actually put in a vote that advances or doesn’t advance some of the contestants on the show,” Stone had said.

“So it’s just a very early starting example of the ways that we think content can be more interactive over time, across devices, between TV and mobile, where a member who subscribes to Netflix can actually feel like they’re part of the story, influence the storyline, and feel immersed in that.”

Netflix has heavily leaned into live content with shows like “Everybody’s Live with John Mulaney” and “Dinner Time Live with David Chang,” along with sports broadcasts, including the NFL Christmas special and WWE shows. In October, the company launched interactive games that people can play on their smart TVs. Netflix is now merging both of these threads by launching live voting.


https://techcrunch.com/2026/01/20/netflix-is-rolling-out-a-live-voting-feature

]]>
https://ourblog.siliconbaypartners.com/netflix-is-rolling-out-a-live-voting-feature/feed/ 0
5 Everyday Problems I Solved With My Smart Home https://ourblog.siliconbaypartners.com/5-everyday-problems-i-solved-with-my-smart-home/?utm_source=rss&utm_medium=rss&utm_campaign=5-everyday-problems-i-solved-with-my-smart-home Mon, 19 Jan 2026 06:02:37 +0000 https://ourblog.siliconbaypartners.com/?p=64077 Smart Home TechSource: How To Geek, Adam Davidson Photo: Tim Brookes/How-To Geek Some smart home tech feels like it’s solving a problem that doesn’t exist. Does your fridge really need to be able to tell you when you’ve run out of milk? Used well, however, smart home tech can help to solve some of the problems you […]]]> Smart Home Tech

Source: How To Geek, Adam Davidson
Photo: Tim Brookes/How-To Geek

Some smart home tech feels like it’s solving a problem that doesn’t exist. Does your fridge really need to be able to tell you when you’ve run out of milk? Used well, however, smart home tech can help to solve some of the problems you face every day.

Knowing when to water your plants

I love houseplants, but I completely suck at keeping them alive. I buy a new plant with the best of intentions, and then before you know it, the plant is a shriveled husk in the corner of the room. I’ve tried buying plants that are idiot-proof, but I still somehow manage to kill them.

My biggest issue is not just remembering to water them, but watering them at the right time. I’ve never really understood the advice to water when the top inch of soil feels dry. It feels dry every time I stick my finger in, even if I’ve just watered them.

Thankfully, there are devices that can help people like me who weren’t born with green thumbs. You can purchase dedicated soil sensors that you shove into the soil of your plant and which measure things such as the soil moisture level, temperature, humidity, and more. I use these sensors to send me alerts when the soil moisture drops below a set level, ensuring that I always water exactly when the plants need it and never too early or too late.
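
That alert rule is essentially a threshold comparison with a guard against repeated nagging. Here is a minimal sketch of the logic in Python; the 20% threshold and the reading format are illustrative assumptions, not values from any particular sensor brand:

```python
# Sketch of a "water me" alert: fire once when moisture drops below a
# threshold, and re-arm only after the plant has been watered again.
# The 20% threshold and plain-float readings are illustrative assumptions.

def check_moisture(readings, threshold=20.0):
    """Given a time-ordered list of moisture percentages, return the
    indices at which an alert should fire (first dip below threshold)."""
    alerts = []
    armed = True  # ready to alert
    for i, moisture in enumerate(readings):
        if armed and moisture < threshold:
            alerts.append(i)
            armed = False  # don't nag on every subsequent low reading
        elif moisture >= threshold:
            armed = True   # watering detected, re-arm the alert
    return alerts

readings = [45.0, 30.0, 19.5, 18.0, 40.0, 16.0]
print(check_moisture(readings))  # [2, 5]: first dip, then the dip after rewatering
```

In a real smart home setup, the same comparison would run inside your hub's automation engine rather than in a script like this.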

Forgetting to take the trash out

This is something else that I used to be terrible at. Almost every Friday, I would only remember that it was trash collection day when I saw the truck pull up outside. It was then a complete panic trying to remember whether it was recycling or standard trash that week.

Originally, I built my own automation that took the information from a published schedule for trash collections. Whenever I first entered the kitchen on a Friday morning, a spoken announcement would tell me which type of trash needed to be put out. It worked reasonably well, but I’d have to update the automation each time a new schedule was released, and I’d sometimes still forget by the time I’d eaten breakfast.

Thankfully, I discovered the Waste Collection Schedule integration in Home Assistant, which automatically pulls the data from my local waste collection schedule, so I don’t need to touch a thing. I added a repeated reminder until I confirm that the trash has been put out, and now I no longer miss trash collections at all.
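
The "remind until confirmed" pattern is simple to sketch. This hypothetical Python version lists when reminders would fire, given an interval and the time the task finally gets confirmed; the 30-minute interval and the safety cap are arbitrary stand-ins:

```python
# Sketch of a "nag until confirmed" reminder: return the minute marks
# (from the first reminder) at which reminders fire, stopping once the
# task is confirmed or a safety cap is reached. All values illustrative.

def reminder_times(confirmed_at, interval=30, max_reminders=10):
    """List the minute marks at which reminders fire before confirmation."""
    times = []
    t = 0
    while t < confirmed_at and len(times) < max_reminders:
        times.append(t)
        t += interval
    return times

print(reminder_times(confirmed_at=75))  # [0, 30, 60]
```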

Leaving wet clothes in the washer

We have a utility room in our home where the washer and dryer live. The room is quite a long way from the main body of the house, down a long corridor. You can hear when the washer or dryer is running, but only if you really stop to listen.

This meant that often, the washing machine would stop running, and neither of us would hear that it had stopped. The wet clothes would then end up sitting in the washing machine, slowly going musty. Sometimes we’d need to wash them all over again.

Smart washing machines can send you alerts when the cycle is finished, but you don’t need a smart washing machine to know when your washing is done. You can use an energy-monitoring smart plug, for example, to see how much power your washing machine is using. When the power level stays low for long enough, you know that the cycle is done.

You can also use options such as vibration sensors that can measure when the washing machine is moving and when it’s still. While the washer will pause at points during the cycle, once the vibrations have remained low for a set period, you know that your washing is done. You can then have your smart home send an alert and never have to worry about musty clothes again.
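
The "low power for long enough" rule behind both approaches can be sketched like this; the 5-watt idle threshold and three-sample quiet period are arbitrary placeholders for whatever your plug and washer actually need:

```python
# Sketch: decide that a wash cycle has finished when power draw stays
# below an idle threshold for a run of consecutive samples, after the
# machine has actually run. Threshold and sample count are illustrative.

def cycle_finished(watts_samples, idle_watts=5.0, quiet_samples=3):
    """Return True if the machine ran (some sample above idle) and the
    most recent samples have all stayed at idle power."""
    ran = any(w > idle_watts for w in watts_samples)
    if not ran or len(watts_samples) < quiet_samples:
        return False
    return all(w <= idle_watts for w in watts_samples[-quiet_samples:])

# Spikes during spin, a brief mid-cycle pause, then a quiet tail:
samples = [2.0, 480.0, 310.0, 3.0, 290.0, 2.5, 2.0, 1.8]
print(cycle_finished(samples))  # True
```

Note that the single low reading mid-cycle (the pause at 3.0 W) doesn't trigger a false "done," because only an unbroken quiet tail counts.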

My kids getting up too early

This was something that was a real struggle when our kids were younger. They would wake up at crazy hours of the morning and come into our room to ask if it was time to get up yet. This wasn’t great for our sleep.

When the kids were too young to tell the time, having a clock in the room didn’t help. In the end, the solution was incredibly simple. I put a smart bulb in each of the kids’ rooms and set up an automation to make the light turn blue each morning at 7 am.

If they woke up and the light wasn’t on, it wasn’t time to get up yet. They could get out of bed and play if they wanted, but they had to stay in their room until the blue light turned on. If they turned their light on to play, it would still change to a blue light at 7 am, at which point they would know they could leave their rooms. It worked far better than I had hoped.

Having too many remotes

I’ve always hated having to switch between multiple remotes to control all my AV devices. There was a remote for my TV, another for the surround sound system, another for the Apple TV, another for the Roku, and so on. I initially had a Logitech Harmony universal remote that could control all the devices, but mine broke after Logitech stopped making them.

I ended up creating my own universal remote using a wireless remote that I found online. Using Home Assistant, I set up an automation that would perform the relevant actions based on whichever key was pressed.

The beauty of making my own universal remote was that I could tailor it to my exact needs. I added features such as a dedicated button to automatically enter the PIN codes for my streaming accounts, and another to automatically skip through the typical length of ads. Now I have one remote that does everything, and all my other remotes are safely tucked away out of sight.
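
At its core, a DIY setup like this is a dispatch table mapping each button to a sequence of device commands. Here is a hypothetical sketch; the device names and command strings are made-up placeholders, and a real setup would call your hub's API instead of returning strings:

```python
# Sketch of a button-dispatch table for a DIY universal remote: each
# key maps to a list of (device, command) steps. All names are
# placeholders, not real Home Assistant entities or services.

REMOTE_MAP = {
    "power":     [("tv", "power_toggle"), ("soundbar", "power_toggle")],
    "watch_atv": [("tv", "input_hdmi1"), ("apple_tv", "wake")],
    "skip_ads":  [("apple_tv", "seek_forward_90s")],
}

def handle_key(key):
    """Return the command sequence for a key press; unmapped keys
    return an empty list so stray presses are harmless."""
    return REMOTE_MAP.get(key, [])

print(handle_key("power"))
# [('tv', 'power_toggle'), ('soundbar', 'power_toggle')]
```

The nice property of the table approach is that adding a custom button, like the PIN-entry or ad-skip buttons described above, is just one more entry in the mapping.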

Tinkering with your smart home is fun, but ultimately, it’s meant to make life in your home smarter. One of the most satisfying ways to do so is to find a pain point and use your smart home to solve that problem. I still get a smug smile every time I take out the trash, thinking about the bad old days before I set it all up.

https://www.howtogeek.com/everyday-problems-i-solved-with-my-smart-home

]]>
The AI Revolution Is Coming For Your Dating Life https://ourblog.siliconbaypartners.com/the-ai-revolution-is-coming-for-your-dating-life/?utm_source=rss&utm_medium=rss&utm_campaign=the-ai-revolution-is-coming-for-your-dating-life Wed, 07 Jan 2026 16:30:47 +0000 https://ourblog.siliconbaypartners.com/?p=64062 AI DatingSource: Cosmopolitan, Madeleine Frank Reeves Photo: Julia Dufossé But…it’s not as scary as it sounds? In the biggest game changer since dating apps first came around, AI is poised to upend how we match and meet. Here, your indispensable guide to wielding this latest wave of tech to your advantage. But first, an answer to […]]]> AI Dating

Source: Cosmopolitan, Madeleine Frank Reeves
Photo: Julia Dufossé

But…it’s not as scary as it sounds? In the biggest game changer since dating apps first came around, AI is poised to upend how we match and meet. Here, your indispensable guide to wielding this latest wave of tech to your advantage.

But first, an answer to your big question….That question, of course, being, “Will AI fix dating?” Can it even? Because if there’s one thing anyone with a dating app and an interest in hooking up (in any way, short or long term) knows, it’s that this whole current situation is…not ideal. Frustrating at best. Painful or dangerous at worst. And yet we’re all fully indoctrinated, trying en masse to fulfill our romantic and sexual urges via swipe, speaking a dating language—full of breadcrumbing and ghosting and beige flags and GGGs—as if it’s the only way to communicate. Hell, complaining about it all has become a subculture of its own. So yeah, there’s room for improvement. And the companies that have been driving dating for the past decade are betting that that improvement is AI.

If you know where to look, the story of the revolution to come is clear: In August 2023, Match Group, owner of apps like Tinder, Hinge, and OkCupid, appointed a vice president of innovation to lead a team focused on AI. Bumble has launched multiple new machine-learning-powered features, including one that harnesses AI based on preferences and past matches to increase the likelihood of success. A sea of brand-new AI-powered apps (many of them in the stories you’ll see below) promises to perfect your dating profile, finesse your flirting game, and generally help you put your best digital foot forward. And then there are businesses like Replika rolling out AI bots capable of having full-on virtual romantic relationships with you.

So back to your question. Will any of this actually work? After six months of researching, analyzing, and, yes, dating using AI, we can report that in some cases, it already is working. And that in others, not so much. We can also report that daters are generally open to this next iteration of dating tech: A new Cosmopolitan-Bumble survey (keep scrolling to read all of our juicy survey findings) found that 69 percent of you are excited about the ways AI could make dating easier and more efficient. Sixty-eight percent think AI can help you feel more confident on the apps. Eighty-six percent believe it could help solve pervasive dating fatigue, and 67 percent believe it can make dating apps safer. (There are plenty of skeptics among you too, as is natural and warranted, because issues like bias, privacy concerns, and catfishing are real.)

The other crucial thing we uncovered: an urgent need for an all-in-one-place manual of sorts that dives deep into what exactly is out there right now, what’s coming next, how to think about it, and, most importantly, how to actually use AI to better your dating life. And so us being Cosmo—your always-expert guide to all things relationships and therefore morally obligated to get here first—we’re giving you that manual. The stories ahead are packed with tangible advice on navigating all this now and in the future. Because although “artificial intelligence” and “romance” may sound as incompatible as they come, there’s no stopping this takeover. As 72 percent of you reiterated, an AI-infused dating future is coming fast, whether we’re ready or not. So let’s be ready, okay?

Here’s what you really think about AI and dating…

The exclusive data throughout this story comes from a new survey conducted by Cosmopolitan and Bumble between November 24 and December 12, 2023. We polled 5,000 single and actively dating Gen Zers and millennials ages 18 to 42 on their thoughts, feelings, and behaviors around AI and dating.

71 percent of you would use AI to help create or optimize your dating app profile.

81 percent of you would rather ask AI than your friends for help choosing dating profile photos.

65 percent of Gen Z and 66 percent of millennials would be open to taking dating advice from an AI bot.

78 percent of you would use an AI bot to help you flirt on a dating app. Broken down by age, that includes 81 percent of Gen Z and 76 percent of millennials.

71 percent of you say that using AI-generated photos of yourself doing things you’ve never done or visiting places you’ve never been qualifies as catfishing.

81 percent of you would share your message history with an AI tool to help guide dating app convos. Broken down by gender, that includes 86 percent of men and 77 percent of women.

71 percent of you believe that there should be limits to using AI-generated profile pictures and bios on dating apps.

66 percent of you would use AI to help create or optimize your dating app profile.

58 percent of men and 57 percent of women would be okay with their partner using AI to do romantic things like write love letters or plan dates, but 59 percent of you say it would be a turn-off if a partner used AI for everything in your dating life.

Your top concerns when it comes to AI and dating apps are…

1. Losing the element of authenticity

2. The potential increase in catfishing

3. The loss of emotional connection

https://www.cosmopolitan.com/sex-love/a46574186/ai-dating/?utm_source=livingsimply.com

]]>