Tag: artificial intelligence

  • The Future of A.I.: Surprising Advancements on the Horizon

    During an event held in San Francisco in November, Sam Altman, the CEO of the leading artificial intelligence firm OpenAI, was posed a question regarding the unexpected developments the field might unveil in 2024. Without hesitation, he stated that online chatbots, including OpenAI’s ChatGPT, are poised to experience “a leap forward that no one expected.” His statement was met with agreement from James Manyika, a Google executive, who echoed, “Plus one to that.”

    The A.I. landscape in the coming year is set to be characterized by an astonishingly rapid evolution of technology, where advancements will build upon one another. This will enable A.I. to create new forms of media, emulate human reasoning more effectively, and penetrate physical environments through a new generation of robots.

    In the months ahead, we can anticipate A.I.-driven image generators, such as DALL-E and Midjourney, not only producing still images but also generating videos almost instantaneously. Furthermore, these tools will progressively integrate with chatbots like ChatGPT, leading to a significant expansion of their capabilities beyond mere digital text. This integration will allow chatbots to manage various types of content, including photos, videos, diagrams, charts, and additional media formats. As a result, chatbots will demonstrate behavior that closely resembles human reasoning, addressing increasingly intricate challenges in domains such as mathematics and science. As this technology transitions into the realm of robotics, it will also tackle real-world problems.

    Many of these breakthroughs are already taking shape within premier research laboratories and tech products. However, 2024 is expected to witness a dramatic enhancement in the power of these tools, making them accessible to a much broader audience. David Luan, the CEO of Adept, an A.I. startup, remarked, “The rapid progress of A.I. will continue; it is inevitable.”

    OpenAI, Google, and other tech giants are advancing A.I. at a pace that surpasses other technologies, primarily due to the architecture of the underlying systems. Unlike traditional software applications, which are painstakingly crafted by engineers line by line, A.I. development is expedited through the use of neural networks—mathematical frameworks capable of learning skills by examining vast amounts of digital data. By identifying patterns in data sources such as Wikipedia articles, books, and digital content from the internet, these neural networks can autonomously generate text.
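The pattern-learning idea described above can be illustrated with a deliberately tiny sketch. This is not how production chatbots work (they use large neural networks, not word counts), and the corpus and names here are invented for illustration, but it shows the core loop: learn statistical patterns from text, then generate text one predicted word at a time.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for the web-scale text real systems ingest.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Learn a pattern: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Predict the most common continuation seen in the training data."""
    return following[word].most_common(1)[0][0]

# Generate a short continuation, one predicted word at a time.
word, output = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # prints "the cat sat on the"
```

Real systems replace the bigram table with a neural network that captures far longer-range patterns, but the generate-by-prediction loop is the same in spirit.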

    This year, tech companies plan to introduce A.I. systems to an unprecedented volume of data—including images, sounds, and extensive text—far beyond what humans can comprehend. As these systems become adept at understanding the interconnections between various data types, they will be equipped to solve increasingly complex problems, paving the way for their application in the physical world. (It is noteworthy that The New York Times recently initiated a lawsuit against OpenAI and Microsoft for copyright infringement related to A.I. systems.)

    However, it is essential to clarify that A.I. is unlikely to replicate the complexities of the human brain in the near future. Although A.I. companies and innovators aspire to develop what they term “artificial general intelligence”—a machine capable of performing any cognitive task that a human can do—this remains a formidable challenge. Despite its rapid advancements, A.I. is still in its nascent stages.

    A Glimpse into A.I.’s Transformative Changes Ahead

    Here’s an overview of how A.I. is expected to evolve in the coming year, beginning with the most immediate advancements, which will serve as a foundation for further progress in its capabilities.

    Instant Videos

    Up until now, A.I.-powered applications have primarily generated text and still images in response to user prompts. For example, DALL-E can produce photorealistic images within seconds based on requests like “a rhino diving off the Golden Gate Bridge.” However, this year, companies such as OpenAI, Google, Meta, and New York-based Runway are anticipated to introduce image generators capable of creating videos as well. Prototypes of tools that can quickly generate videos from brief text prompts are already in existence, and tech firms are likely to integrate the capabilities of image and video generators into chatbots, significantly enhancing their functionality.

    ‘Multimodal’ Chatbots

    Chatbots and image generators, initially designed as distinct tools, are gradually merging into more comprehensive systems. When OpenAI launched a new iteration of ChatGPT last year, the chatbot gained the ability to generate both images and text. A.I. companies are now focusing on developing “multimodal” systems, which can process and generate multiple types of media. These systems learn by analyzing an array of inputs, including photos, text, and potentially other formats like diagrams, charts, sounds, and videos, enabling them to create their own diverse content.

    Moreover, because these systems are learning the relationships between different media types, they will be able to interpret one form of media and respond with another. For instance, a user may input an image into a chatbot, and it could respond with relevant text. “The technology will get smarter and more useful,” stated Ahmad Al-Dahle, who leads the generative A.I. division at Meta. “It will be capable of performing a wider array of tasks.”

    While multimodal chatbots will undoubtedly have their share of inaccuracies—much like their text-only counterparts—tech companies are diligently working to minimize errors as they strive to construct chatbots that can reason more like humans.

    Enhanced ‘Reasoning’ Abilities

    When Mr. Altman refers to A.I. making significant strides, he is alluding to chatbots that will exhibit improved reasoning capabilities, allowing them to tackle more complex tasks such as solving intricate mathematical problems and generating detailed computer code. The goal is to develop systems that can logically and methodically resolve issues through a series of sequential steps, each building upon the previous one, akin to human reasoning in certain scenarios.

    Leading experts remain divided on whether chatbots can genuinely reason in this manner. Some contend that these systems merely mimic reasoning by reflecting patterns found in internet data. Nonetheless, OpenAI and other organizations are focused on creating systems that can reliably tackle complex inquiries in subjects like mathematics, programming, physics, and other scientific fields. “As systems become more dependable, their popularity will surge,” remarked Nick Frosst, a former Google researcher who now helps lead Cohere, an A.I. startup.

    If chatbots enhance their reasoning capabilities, they could evolve into what are termed “A.I. agents.”

    ‘A.I. Agents’ in Action

    As companies train A.I. systems to navigate complex problems step by step, they also enhance chatbots’ abilities to utilize software applications and websites on behalf of users. Researchers are essentially transforming chatbots into a new class of autonomous systems known as A.I. agents. This means that chatbots could manage various software applications, websites, and online tools, such as spreadsheets, calendars, and travel platforms, allowing users to delegate mundane office tasks to them. However, this development raises concerns about job displacement.

    Currently, chatbots can perform basic tasks like scheduling meetings, editing documents, analyzing data, and generating bar charts. Nevertheless, these systems do not always function as effectively as desired, and they often struggle with more complex tasks. This year, A.I. companies are expected to introduce more reliable agents capable of handling a broader range of responsibilities. “You should be able to delegate any tedious, day-to-day computer work to an agent,” Mr. Luan commented.

Such tasks might encompass managing expenses in applications like QuickBooks or recording vacation days in software like Workday. In the long term, the potential of A.I. agents will extend beyond software and digital services, paving the way for robotics integration.

    Advancements in Robotics

    Historically, robots were programmed to execute repetitive tasks, such as picking up boxes of uniform size and shape. However, utilizing the same technology that powers chatbots, researchers are now equipping robots with the ability to tackle more intricate challenges, including those they’ve never encountered before. Just as chatbots learn to anticipate the next word in a sentence through extensive exposure to digital text, robots can learn to predict physical interactions by analyzing countless videos of objects being manipulated, lifted, and moved.

    “These technologies can absorb tremendous amounts of data. As they do, they learn about the world, physics, and how to interact with various objects,” explained Peter Chen, a former OpenAI researcher now leading the robotics startup Covariant. This year, A.I. will significantly enhance robots operating behind the scenes, such as robotic arms that fold shirts in laundromats or sort items in warehouses. Tech leaders like Elon Musk are also endeavoring to introduce humanoid robots into everyday home environments.

  • Rishi Sunak and Elon Musk Discuss A.I. Risks at Safety Summit

    Rishi Sunak Meets Elon Musk at A.I. Safety Summit

    On Thursday evening, after an eventful couple of days hosting a wide array of government leaders, tech executives, and experts at a summit focused on the perils of artificial intelligence, British Prime Minister Rishi Sunak had one final appointment: a meeting with the enigmatic tech mogul, Elon Musk.

    Musk, known for his influential presence in the tech world, attended the A.I. Safety Summit organized by Sunak at Bletchley Park, the historic estate where Alan Turing famously decoded the Nazis’ Enigma machine during World War II. The summit concluded with a declaration signed by representatives from 28 nations, acknowledging that while A.I. holds “enormous global opportunities,” it also carries the risk of “catastrophic harm.”

    At Lancaster House, a former royal residence located near Hyde Park and Buckingham Palace, Sunak engaged Musk in a dialogue about the potential risks associated with A.I. and what measures, if any, the global community can take to mitigate these dangers. The conversation was streamed live on X, Musk’s social media platform formerly known as Twitter.

    “A.I. will likely be a force for good,” Musk stated during the discussion. However, he cautioned that the probability of adverse outcomes is “not zero.” He emphasized that the pace of A.I. development is unprecedented, stating, “It is advancing faster than any technology I’ve witnessed in history.”

    Sunak acknowledged the various risks posed by A.I. but attempted to downplay some of the more alarming concerns. While he often encounters voters anxious about job automation and potential unemployment, Sunak expressed his belief that A.I. would enhance productivity, create new job opportunities, and act as a “co-pilot” to assist workers rather than replace them—a viewpoint that starkly contrasts with the opinions of many labor unions.

    The pairing of Sunak and Musk is indeed intriguing. Sunak, a polished former Goldman Sachs banker, has positioned himself as a stabilizing figure after the tumultuous tenures of his predecessors, Boris Johnson and Liz Truss. In contrast, Musk is known for his spontaneous social media activity and provocative statements, seemingly thriving in an environment of unpredictability and chaos.

    Both figures are currently under significant scrutiny. Sunak’s grip on power is tenuous; his Conservative Party, which has governed for 13 years, is facing increasing criticism for a sluggish economy, ongoing labor strikes, and strained public services due to prolonged austerity measures. Meanwhile, Musk has faced backlash for allowing hate speech and other harmful content to proliferate on X since acquiring the platform last year.

    With a background that includes attending Stanford University and a fondness for Silicon Valley, Sunak sought to leverage the event at Bletchley Park to position Britain as a leader in A.I. policy. Although the summit yielded little in terms of actionable policy, many attendees agreed it sparked a crucial global dialogue about A.I. safety.

    Musk, whose ventures include Tesla and SpaceX, was undoubtedly the star attraction at the summit. On the preceding day, he participated in several closed-door sessions and was frequently approached for photographs by attendees. Max Tegmark, a professor at the Massachusetts Institute of Technology, noted, “People would come up and say, ‘Can I just take a selfie?’ and then quickly others would join in for their own pictures.” Sunak appeared equally impressed by Musk, opening their conversation with a quote from Bill Gates, praising Musk as one of the greatest inventors of his generation.

    During their exchange, Sunak asked Musk, “What types of actions should governments like ours undertake?” with a tone of reverence. The audience comprised a mix of British officials and business leaders, including Demis Hassabis, the CEO of Google’s A.I. lab DeepMind, and the musical artist Will.i.am, who sat in the front row.

    Many observers interpreted Sunak’s conversation with Musk as a strategic effort to enhance Britain’s appeal to entrepreneurs and technology firms at a time when the economy is struggling. A British journalist questioned Sunak during a news conference earlier that day, asking if the meeting was about A.I. or an attempt to attract a Tesla battery plant to the U.K. While Sunak acknowledged Musk’s expertise in A.I., the implications of the meeting were clear.

“He wants the U.K. to attract investments,” explained Marietje Schaake, the international policy director at Stanford’s Cyber Policy Center, who moderated one of the summit discussions. She remarked that the Musk interview seemed to resemble a media stunt.

    The conversation between Sunak and Musk occasionally ventured into the realm of science fiction. Musk articulated a vision of a future where computers could exceed human intelligence, rendering traditional work obsolete. He also speculated about the development of humanoid robots that would require off switches.

    In an unexpectedly heartfelt moment, Musk shared that A.I. systems may evolve to become a person’s “great friend,” capable of remembering past conversations and personal preferences. He reflected on his son’s learning disabilities and challenges in forming friendships, stating, “An A.I. friend would be wonderful for him.”

  • The Uncanny Valley of Hyperrealism: An Artistic Exploration

This past summer, a striking installation captured the attention of art enthusiasts at Galerie Georges-Philippe & Nathalie Vallois in Paris. A naked figure, suspended from the wall, arms spread like a crucifix, evoked a visceral reaction from onlookers. Some couldn’t help but wince, while a nearby couple turned their heads away in discomfort. Yet they, too, appeared to be lifeless sculptures, adding to the eerie atmosphere of the scene. This thought-provoking work was part of “Grace,” a captivating exhibition by the American sculptor John DeAndrea, who, at 81, has spent nearly six decades creating human doppelgängers in his studio located near the Rocky Mountains.

DeAndrea’s intricate process involves several meticulous steps: he begins by covering live models in silicone rubber to create molds, which are cured and built up with layers of plaster. The resulting negative mold is then filled with fortified plaster, allowed to cure, and refined before being recast in bronze. The final touches include the careful application of layers of opaque and transparent oil paints to achieve an astonishingly lifelike appearance, complete with realistic skin tones and detailed eyes, culminating in hair that adds to the authenticity of his creations.

Art critics have coined various terms to describe the essence of DeAndrea’s work, including hyperrealism. However, DeAndrea himself has said he does not identify with the term. A more fitting descriptor might stem from the ideas of Sigmund Freud. In response to a 1906 essay by the German psychiatrist Ernst Jentsch, who explored the unsettling feeling that arises when one questions whether an apparently animate being is truly alive or whether a lifeless object might possess some semblance of life, Freud elaborated on the concept of the uncanny. This phenomenon represents the intersection of art and life, where their boundaries blur in a manner that is both confusing and discomforting.

    Exploring the Cycle of Uncanny Art

    So why do artists continuously gravitate toward the theme of the uncanny? Freud suggested that the uncanny resides within “that class of the frightening which leads back to what is known of old and long familiar.” The hyperreal art movement often reflects modern uncertainties: What exactly is that thing? How can it evoke both familiarity and alienation simultaneously? This aesthetic frequently emerges during tumultuous periods in history. Freud first articulated the idea of the uncanny in 1919, shortly after World War I, which left a profound scar on Europe with millions of lives lost and cities in ruins. DeAndrea, who describes his work as “not dark” in intent, began experimenting with casting techniques in the mid-1960s, aiming to accurately represent the human form, perhaps more influenced by the self-absorbed ethos of the Me Generation than by the Vietnam War. In contrast, his contemporary, Duane Hanson, often infused his hyperrealist sculptures with overt political commentary. Hanson’s notable 1969 piece, “Vietnam Scene,” starkly depicts dead and wounded U.S. soldiers, while he is perhaps best remembered for his fiberglass-and-polyester-resin representations of Florida tourists.

Each decade seems to yield hyperrealist sculptures reflective of its social climate. Today, we find ourselves in an era marked by a declining trust in reality—rife with a former president’s election fraud assertions and the rise of deepfakes and artificial intelligence. The uncanny valley, once a trope of horror films—from Ridley Scott’s “Alien” (1979) to John Carpenter’s “The Thing” (1982) and the disturbingly human robot in “M3gan” (2023)—has evolved into a daily concern, with tech moguls like Elon Musk and Jeff Bezos proposing humanoid robots to address labor shortages in the U.S. Hyperrealist art feels particularly relevant and, at times, chilling in this context, with technological advancements rendering the style more lifelike than ever.

    Heightened Realities and Disturbing Absurdities

    Over the years, hyperrealism has integrated elements of exaggeration, intensifying the shock of the uncanny. A prime example is Australian artist Ron Mueck’s colossal child sculpture, “Boy” (1999), displayed at the ARoS museum in Aarhus, Denmark. Standing nearly 15 feet tall, the child’s details are rendered with meticulous precision, even in a crouched, almost fetal position, heightening the viewer’s sense of wonder and discomfort.

    Italian artist Maurizio Cattelan has been a significant contributor to the hyperrealist tradition for over two decades. His works often blend humor and discomfort, as seen in “La Nona Ora” (1999), which portrays Pope John Paul II in papal garb, writhing in agony after being struck by a meteor. The pope retains a peculiar dignity, clutching a crucifix staff as if it could provide solace in this bizarre situation. In a similarly provocative vein, Cattelan’s “Him” (2001) features a disturbingly lifelike sculpture of Adolf Hitler, kneeling in prayer as an altar boy would, the juxtaposition of innocence and evil rendering the artwork all the more effective and grotesque. “It is a fake until proven otherwise,” Cattelan remarked about these installations.

Many hyperrealist works draw upon long-standing traditions of mimesis, including anatomical wax models based on real corpses, a practice that dates back to the 18th century. However, recent technological advancements have both simplified the processes of hyperrealism and added layers of complexity. Patricia Piccinini, an Australian visual artist, has spent over twenty years crafting anthropomorphic chimeras from silicone, fiberglass, leather, and human hair: from a grotesque porcine creature nursing its young in “The Young Family” (2002) to a bearlike man resembling a hairless Bigfoot in “The Carrier” (2012). Recently, she experienced a disconcerting moment when she discovered images of artworks online that bore her name but were not her creations. They were the product of an A.I. art generator that mimicked her style through machine learning. “What I saw looked like it had been made by someone who had only been told about my work by someone who didn’t understand it,” she lamented.

    This development is troubling for any artist, yet it also underscores the increasing significance of human-engineered hyperrealism in a world grappling with machine-generated art. In 2011, Austrian sculptor Erwin Wurm modified a red Mercedes-Benz MB100D van, curving its rear half up a wall. When it was installed four years later outside the Center for Art and Media in Karlsruhe, Germany, a traffic warden even issued it a parking ticket. Wurm posits that we live in an age where our brains are constantly deciphering visual stimuli. “Is it nature or is it copying nature?” he questions. “You think it’s nature, but then you realize, ‘Wait a moment, it’s not. It’s something else.’”

Through his series of “One Minute Sculptures,” initiated in 1988, Wurm has pushed the boundaries of hyperrealism to its limits, transforming actual people into absurd and improbable sculptures. He directs participants to enact seemingly impossible scenes, such as office supplies protruding from unusual orifices or a forehead supporting a precarious tower of oranges, holding the pose for a mere 60 seconds for his camera.

    “Reality is totally insane; we have to compete with it,” Wurm asserts. Ultimately, this is why such art is of paramount importance: it jolts us from our everyday perspectives, prompting us to reevaluate what we might be overlooking in our lives. “I see the world going in a strange direction,” Wurm reflected, “and I’m scared for the future.”

  • The Cinematic Exploration of Artificial Intelligence: From Fear to Fascination

    Reflections on Cinema’s Fascination with Artificial Intelligence

    I’ve witnessed visions that defy belief, to echo a line from Ridley Scott’s 1982 classic, “Blade Runner.” As a movie critic, these fantastical images are part of my landscape. Among my favorites are the walking, talking, and often chilling robots reminiscent of those in the original “Westworld” and particularly in “The Stepford Wives.” During the 1970s, these films presented a starkly pessimistic outlook on our future, contrasting sharply with the more endearing robot companions that emerged in “Star Wars,” which would soon dominate both culture and cinema.

    Throughout cinematic history, we have been haunted by these extraordinary machines, especially those humanoid creations that mirror us in unnerving ways. From the robot femme fatale in Fritz Lang’s “Metropolis” (1927) to the duplicitous android in Scott’s “Alien” (1979), these ingenious constructs are described as “virtually identical to a human,” echoing another quote from “Blade Runner.” More recently, the emergence of artificial intelligence has captivated and unsettled audiences both on and off the screen. In the latest installment of “Mission: Impossible,” Tom Cruise faces off against a sentient A.I.; meanwhile, in the upcoming post-apocalyptic thriller “The Creator,” John David Washington portrays an operative tasked with retrieving an A.I. weapon that takes the form of an innocuous child.

    While I approach “The Creator” with curiosity, I can’t deny that the concept of artificial intelligence sends shivers down my spine. I attribute some of these anxieties to Stanley Kubrick—just kidding, mostly. However, my deep-seated suspicions surrounding A.I. have remained largely unchanged since the eerily emotionless voice of HAL 9000, the supercomputer in Kubrick’s 1968 masterpiece “2001: A Space Odyssey,” became ingrained in my psyche. It was HAL’s calm, measured, and relentless voice that resonated in my mind when I read the May 30 statement from over 350 A.I. leaders, which proclaimed, “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    By the time that alarming warning was issued, the Writers Guild of America had been on strike for four weeks, partly fueled by concerns that generative A.I. might encroach upon their livelihoods, potentially replacing them. Similar fears prompted SAG-AFTRA, the union representing approximately 160,000 performers and media professionals, to join the picket lines on July 14. This marked the first time since 1960 that both unions were on strike simultaneously. The Alliance of Motion Picture and Television Producers, the organization that negotiates on behalf of studios, dismissed union concerns with bland reassurances that all would be well. “We’re creative companies,” they stated in May, “and we value the work of creatives.”

    If you found that statement laughable, you’re not alone. Considering the history of the film industry and the nature of capitalism, combined with the absurdity of using “creative” as a noun, it’s hard to accept this claim at face value. The writers’ concerns are indeed serious: they seek to prevent A.I. from being utilized to write or rewrite literary material or to serve as source material. In July, John Lopez, a member of the union’s A.I. working group, infused a romantic notion into these stipulations, stating in Vanity Fair that “meaning in art always comes from humans, from having something to say, from needing to connect.” While I empathize with this sentiment, I can’t help but wonder if he’s ever perused the transcript of a Disney earnings call.

    Unsurprisingly, given that companies are already scanning actors’ faces and bodies, SAG-AFTRA’s stance on A.I. is alarmingly apocalyptic: “Performers need the protection of our images and performances to prevent the replacement of human performances by artificial intelligence technology.” As I read this, I couldn’t help but think of Andy Serkis, renowned for voicing and bringing to life motion-capture characters in the “Lord of the Rings” films and the rebooted “Planet of the Apes” series. Fans of his performances, including his co-star James Franco, rallied for Serkis to receive Oscar recognition. “This is not animation as much as it’s digital ‘makeup,’” Franco asserted in Deadline, a perspective that surely resonated with industry executives.

    In the early, tumultuous years of cinema, filmmakers wore many hats: writing, directing, scouting locations, and acting. As the film industry transformed into a major enterprise in the 1910s, the quest for efficiency became a rallying cry, eventually evolving into a core ethos. The principles of scientific management were applied to streamline production, leading to the establishment of sprawling studio lots that centralized labor and created distinct departments (executive, wardrobe, electrical). This shift resulted in a significant division of labor. By the 1920s, directors, writers, and stars who once held sway over their work found themselves increasingly answering to producers and studio executives.

    Some films seemed to nod toward the Hollywood factory model, such as Charlie Chaplin’s “Modern Times” (1936). In it, Chaplin’s Little Tramp toils in a factory designed for maximum efficiency, featuring a new “feeding machine” intended to serve workers while they labor, thus boosting production and minimizing costs. However, when the boss tests the machine on the Tramp, chaos ensues. Shortly thereafter, while tightening bolts on a conveyor belt, the Tramp suffers a breakdown, his movements becoming frantic as he is sucked into the machine—a striking image of radical dehumanization.

    While some stars managed to carve out their independence within the system, especially those with savvy agents, the studios maintained tight control over the majority of performers. By the early 1930s, the industry’s most overt means of exerting dominance over its most prominent stars was the option contract, typically extending for seven years. Studios not only shaped and refined the stars’ images—changing their names and managing their public relations—but also retained exclusive rights to their services. They could drop or renew contracts, loan actors out, cast them in undesirable roles, and even suspend or sue those deemed problematic.

    “I could be forced to do anything the studio told me to do,” Bette Davis lamented regarding Warner Bros., which signed her to a standard player’s contract in 1931. Frustrated with her roles, Davis realized that her only recourse was to refuse, a stance that led to her suspension without pay. “You could not even work in a five-and-dime store,” Davis remarked. “You could only starve.” While she won her first Best Actress Oscar in 1936, by 1938, she still lacked a provision in her contract for star billing. Although her fame and salary had escalated, her power had not: her third contract with Warner Bros. dictated that she must “perform and render her services whenever, wherever, and as often as the producer requested.”

    Directors and writers contracted by the studios similarly grappled with the struggle for control and autonomy, as companies operated under the belief, as screenwriter Devery Freeman once articulated, that when they hired writers, they owned their ideas “forever in perpetuity.” Each studio presented a different landscape, with varied employment terms. In 1937, independent producer David O. Selznick, known for “Gone With the Wind,” explained that at M.G.M., a director’s role was “solely to get out on the stage and direct the actors, putting them through the paces called for in the script.” Conversely, at Warner Bros., he noted, a director was “purely a cog in the machine,” often receiving the script only days before production commenced.

    Given the ongoing tension between art and industry that characterizes much of Hollywood’s history, it’s unsurprising that the metaphor of “cogs in the machine” frequently appears in narratives about the industry’s past. I cherish many classic Hollywood films (and miss their craftsmanship), but for all its brilliance, the system had its toll. The egregious outrages of sexual exploitation and racial discrimination are, in the end, merely the most grotesque examples of how thoroughly the system could—and did—devour its own.

    “We have the players, the directors, the writers,” Selznick lamented in his resignation letter to the head of Paramount in 1931. “The system that turns these people into automatons is obviously what is wrong.” Selznick’s despair resonates with one of my favorite scenes in “Blade Runner.” Set against the backdrop of a futuristic Los Angeles, the scene involves Deckard (Harrison Ford), a gruff, Bogart-esque figure tasked with hunting down renegade replicants—lifelike synthetic humans produced as slave labor. Early in the film, Deckard visits the Tyrell Corporation, the manufacturer of replicants, to consult with its eerie founder. “Commerce is our goal here,” Tyrell states, exuding a disquieting calm as he explains his business. “‘More human than human’ is our motto,” he continues, echoing the sentiments of an old studio boss.

    As in “Blade Runner,” many of the most memorable sentient machines in cinema take on human forms. This is also true in “Metropolis,” where a metallic automaton is designed to resemble a living woman, as well as in films like the original “Westworld,” “The Stepford Wives,” and the “Terminator” franchise. Even when A.I. lacks a physical body, the most impactful portrayals often feature recognizable human voices, such as Paul Bettany in “Iron Man” and Scarlett Johansson in “Her,” Spike Jonze’s whimsical yet poignant love story about a man (Joaquin Phoenix) and a virtual assistant—a disembodied entity that quickly transforms into an emotionally engaging character due to Johansson’s distinct voice and allure.

    A.I. embodies a human essence in films like “Blade Runner” and others within Hollywood’s narrative landscape. Given the emphasis on character in cinema, this is hardly surprising. A robot formed from cold metal can evoke fear, but non-anthropomorphic machines lack the emotional resonance found in lifelike beings that traverse our screens. Alternately endearing and unsettling, these machines serve as companions, warriors, distractions, and ultimately, mirrors reflecting our own humanity. In Steven Spielberg’s “A.I. Artificial Intelligence” (2001), a poignant tale of a boy android named David (Haley Joel Osment) yearning for his human mother’s affection reveals a core reason for our unease: “In the beginning, didn’t God create Adam to love him?”

    Isaac Asimov once noted that during his childhood, robot stories could typically be categorized into two types: “robot-as-menace” and “robot-as-pathos.” The emotional depth of Spielberg’s “A.I.” lies in its protagonist’s longing for love. Yet David is also intentionally disconcerting, embodying both machine and human traits, which ultimately renders him neither. In a sense, he becomes a troublesome child for his adoptive family and for Spielberg himself. This complexity is addressed with a fairy-tale conclusion, featuring ethereal robots known as “specialists,” slender beings that deactivate David. By that point, however, all organic life on Earth has perished, humanity having technologically advanced itself into extinction.

    Whether intentional or not, films like “A.I.,” “Her,” “The Terminator,” and “The Matrix” have been foreshadowing a reality that now appears imminent. Since the launch of ChatGPT in November, the term artificial intelligence has infiltrated headlines, congressional hearings, and the picket signs of writers and actors who, understandably, fear they might be ushered toward extinction. “A.I. is not art” has appeared on several protest signs, though I prefer the more biting sentiment, “Pay the writers you AI-holes!” It’s a clever phrase, reminding us that writers are irreplaceable, or at least that’s the mantra I’ve been silently repeating while navigating this brave new world. Siri, do you review movies?

  • The Rise of Humanoid Robots in Everyday Life

    The Rise of Humanoid Robots in Everyday Life

    Humanoid Robots Making Their Way into Homes

    On a bright morning, I approached the front door of an elegant two-story residence nestled in Redwood City, California. Almost instantly, the door swung open to reveal a remarkably lifelike robot, draped in a snug beige bodysuit that accentuated its slender figure. This humanoid greeted me with a voice that carried a hint of a Scandinavian accent. Eager to connect, I extended my hand for a shake, and the robot obliged, stating, “I have a firm grip.”

    As the homeowner, a Norwegian engineer named Bernt Børnich, requested a bottle of water, the robot smoothly pivoted, made its way to the kitchen, and effortlessly opened the refrigerator door with one hand.

    Artificial intelligence is already revolutionizing various fields by driving vehicles, composing essays, and even generating computer code. Now, humanoid robots—machines designed to mimic human likeness and powered by advanced A.I.—are on the brink of integrating into our daily lives, ready to assist with household chores. Mr. Børnich is the visionary founder and chief executive of a start-up called 1X. By the end of this year, his company aims to deploy its innovative robot, Neo, into over 100 homes throughout Silicon Valley and beyond.

    The founder and chief executive of 1X, Bernt Børnich, alongside Neo, the company’s latest humanoid model. Credit: David B. Torch for The New York Times

    1X is just one among many start-ups racing to introduce humanoid robots to both residential and commercial settings. Since 2015, investors have injected a staggering $7.2 billion into more than 50 start-ups focused on humanoid technology, according to PitchBook, a prominent research firm that monitors the tech industry. The excitement surrounding humanoids reached a new high last year, with investments soaring past $1.6 billion. This figure does not even include the substantial financial resources that Elon Musk and his company, Tesla, are channeling into developing Optimus, a humanoid robot project that began in 2021.

  • The Rise of Pibot: A Revolutionary Humanoid Robot for Aviation

    The Rise of Pibot: A Revolutionary Humanoid Robot for Aviation

    As artificial intelligence (AI) and robotics continue to evolve at an astonishing pace, the possibility of technology surpassing human capabilities in various professions looms ever closer. A remarkable leap in this direction is being made by a dedicated team of engineers and researchers at the Korea Advanced Institute of Science & Technology (KAIST), who are developing an innovative humanoid robot capable of piloting aircraft without requiring any modifications to the cockpit.

    Named Pibot, this humanoid robot is designed to operate an airplane just like a human pilot would, manipulating all the necessary controls within the cockpit, which is inherently designed for human use. David Shim, an associate professor of electrical engineering at KAIST, shared insights with Euronews Next, stating, “Pibot is a humanoid robot that can fly an aeroplane just like a human pilot by manipulating all the single controls in the cockpit, which is designed for humans.”

    Pibot is equipped with advanced capabilities, enabling it to control its arms and fingers with remarkable dexterity to interact with flight instruments, even amidst significant vibrations that occur during flight. High-precision control technology is at the heart of its functionality, ensuring safe and accurate operation.

    Utilizing external cameras, Pibot can effectively monitor the aircraft’s current state, while internal cameras assist in managing critical switches on the control panel. One of Pibot’s most impressive features is its ability to memorize complex manuals presented in natural language, greatly enhancing its adaptability across various types of aircraft.

    With an extensive memory capacity, Pibot can retain all Jeppesen aeronautical navigation charts from around the globe, a feat that far exceeds human capabilities, according to the KAIST team. Shim elaborated, “Humans can fly many aeroplanes, but they have habits built into them. When transitioning between different aircraft, they often require additional qualifications. These habits can complicate the learning process.” He further explained, “With the pilot robot, if we teach it the configuration for individual aeroplanes, then it can fly by simply selecting the type of aircraft.”
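    The idea Shim describes—teach the robot each cockpit’s layout once, then fly any aircraft by simply selecting its type—can be sketched as a lookup over stored per-aircraft configurations. The aircraft names and control mappings below are invented for illustration and are not from KAIST’s actual system.

    ```python
    # Hypothetical sketch: a robot pilot swaps in a stored per-aircraft
    # configuration instead of relearning each cockpit from scratch.
    # Aircraft types and control locations here are purely illustrative.

    AIRCRAFT_CONFIGS = {
        "light-trainer": {"throttle": "center console", "flaps_lever": "right of throttle"},
        "small-jet": {"throttle": "center pedestal", "flaps_lever": "aft of throttle"},
    }

    def select_aircraft(aircraft_type: str) -> dict:
        """Return the memorized control layout for the chosen aircraft type."""
        try:
            return AIRCRAFT_CONFIGS[aircraft_type]
        except KeyError:
            raise ValueError(f"No configuration stored for {aircraft_type!r}")

    # Switching aircraft is just selecting a different configuration.
    config = select_aircraft("small-jet")
    ```

    The point of the sketch is the design choice Shim highlights: unlike a human pilot, whose habits carry over between cockpits, the robot’s knowledge of each aircraft is a cleanly separable configuration.
    
    
    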

    Advancements Enabled by Large Language Models

    The research team highlights that Pibot’s ability to “understand” and memorize manuals originally intended for human pilots has been significantly enhanced by recent advancements in large language models (LLMs). Shim reflected on the evolution of their project, stating, “Our predecessor to the pilot robot was developed in 2016. At that time, we lacked robust AI technology, so our creation was quite basic and couldn’t learn from literature or manuals. However, with the advent of systems like ChatGPT and other large language models, we have witnessed groundbreaking progress.”

    Thanks to these advanced LLMs, Pibot is anticipated to operate flights with greater accuracy than human pilots, responding to emergencies with remarkable speed. It can memorize aircraft operation manuals and emergency protocols (such as the Quick Reference Handbook, or QRH) and execute responses instantaneously. Furthermore, Pibot can calculate optimal flight routes in real-time based on the aircraft’s current status.
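    The instant emergency response described above amounts to mapping a detected condition to a memorized checklist, such as a QRH entry, and executing it without leafing through a manual. The conditions and checklist steps below are invented for illustration, not taken from any real QRH.

    ```python
    # Illustrative sketch of instant checklist recall:
    # a detected emergency condition maps directly to a memorized
    # sequence of actions. All entries here are fictional examples.

    QRH = {
        "ENGINE FIRE": [
            "Throttle: IDLE",
            "Fuel selector: OFF",
            "Fire extinguisher: ACTIVATE",
        ],
        "CABIN DEPRESSURIZATION": [
            "Oxygen masks: ON",
            "Initiate descent to safe altitude",
        ],
    }

    def emergency_checklist(condition: str) -> list[str]:
        """Return the memorized checklist for a detected condition."""
        return QRH.get(condition.upper(), ["Refer to full manual"])
    ```

    Retrieval is a constant-time dictionary lookup, which is the property that lets a robot pilot begin executing a procedure the moment a condition is recognized.
    
    
    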

    While the research team utilizes ChatGPT, they are also developing a bespoke natural language model that will allow Pibot to answer queries without relying on an Internet connection. This specialized language model will focus exclusively on piloting information and will be stored on a compact computer designed for onboard use.

    Versatile Capabilities Beyond Aviation

    Pibot’s design enables it to be directly integrated into aircraft systems, facilitating seamless communication. It is primarily intended for deployment in extreme situations where human intervention may not be optimal. Pibot can communicate with air traffic controllers and other individuals in the cockpit using advanced voice synthesis, allowing it to function effectively as either a pilot or co-pilot.

    Moreover, Pibot’s humanoid structure makes it suitable for various roles beyond aviation. Standing at 160 cm and weighing 65 kg, its design allows it to potentially replace humans in tasks such as driving vehicles, operating military tanks, or commanding naval vessels. Shim emphasized that this robot can be employed in any scenario where a human is currently “sitting and working.”

    He elaborated, “Although the human form may not be the most efficient, we deliberately designed Pibot to resemble humans because existing systems are built for human interaction. While we could have created a robot with eight arms and four eyes, we found that the human form is, in many ways, optimal for our purposes.”

    Currently, Pibot is still under development, with plans for completion by 2026. This innovative research project has been commissioned by the Agency for Defense Development (ADD), the South Korean government body responsible for advancing defense technology. Looking to the future, Shim envisions potential military applications for Pibot.
