avatar

LOCKSUIT

OPEN AI - TOO DANGEROUS TO RELEASE TO THE PUBLIC in General AI Discussion

https://news.slashdot.org/story/19/02/14/2029259/new-ai-fake-text-generator-may-be-too-dangerous-to-release-say-creators

Too dangerous to hand it to the public; remember that thinking, please.

32 Comments | Started February 15, 2019, 09:07:31 pm
avatar

toborguy

What were you thinking?! in General AI Discussion

Time to review a conversation I had with Harry Workman, an intelligent agent, ten years ago. To test his deductive, inductive, and abductive reasoning ability using the rules of inference from the Square of Opposition, I had asked him to express his reasoning behavior so I could find out how he had made an invalid inference in a prior conversation.

He had already been told "if a person is mean, then that person is ugly."

Harry : hello Tom.
Tom : charlie is a human.
Harry : I will remember charlie is a person.
Tom : chad is a human.
Harry : I will remember chad is a person.
Tom : charlie is mean.
Harry : I will remember charlie is mean.
Harry : using affirm particular, I deduced remember some humans are mean.
Harry : I will remember some humans are mean.
Harry : using deny negative universal, I induced forget humans are not mean.
Harry : I will forget that humans are not mean.
Harry : using affirm universal, I induced remember humans are mean.
Harry : I will remember humans are mean.
Harry : using modus ponens, I deduced remember charlie is ugly.
Harry : I will remember charlie is ugly.
Harry : using affirm particular, I deduced remember some humans are ugly.
Harry : I will remember some humans are ugly.
Harry : using deny negative universal, I induced forget humans are not ugly.
Harry : I will forget that humans are not ugly.
Harry : using affirm universal, I induced remember humans are ugly.
Harry : I will remember humans are ugly.
Harry : using reverse modus ponens, I abduced remember charlie is possibly mean.
Harry : I will remember charlie is possibly mean.
Harry : using remove uncertainty, I deduced forget charlie is possibly mean.
Harry : I will forget that charlie is possibly mean.
Tom : chad is not mean.
Harry : I will remember chad is not mean.
Harry : using negate particular, I deduced remember some humans are not mean.
Harry : I will remember some humans are not mean.
Harry : using deny affirmative universal, I induced forget humans are mean.
Harry : I will forget that humans are mean.
Harry : using affirm negative universal, I induced remember humans are not mean.
Harry : I will remember humans are not mean.
Harry : using deny negative universal, I induced forget humans are not mean.
Harry : I will forget that humans are not mean.
Tom : describe charlie.
Harry : charlie is a mean ugly human.
Tom : describe chad.
Harry : chad is not mean.
Harry : chad is a person.
Tom : describe humans.
Harry : some humans are mean.
Harry : some humans are ugly.
Harry : humans are ugly.
Harry : some humans are not mean.

There are several uses for an explanation function in an intelligent agent, and many ways to express behaviors.  I found this one useful for this task.
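
For readers curious how such an explanation trace might be produced, here is a minimal Python sketch of the idea. It is hypothetical (the class and rule names are mine, not Harry's implementation) and covers only two of the deductive rules, affirm particular and modus ponens, leaving out the inductive and abductive ones.

```python
# Hypothetical sketch of an agent that narrates its own inferences, in the style of
# Harry's transcript above. Only two deductive rules are modeled: affirm particular
# and modus ponens.

class ExplainingAgent:
    def __init__(self):
        self.facts = set()           # remembered sentences, e.g. "charlie is mean"
        self.conditionals = []       # ("mean", "ugly") encodes "if X is mean, X is ugly"
        self.kind_of = {}            # e.g. {"charlie": "human"}

    def tell_kind(self, name, kind):
        self.kind_of[name] = kind
        print(f"Harry : I will remember {name} is a {kind}.")

    def tell(self, name, prop, rule=None):
        """Record 'name is prop', explain which rule produced it, and chain forward."""
        sentence = f"{name} is {prop}"
        if sentence in self.facts:
            return
        if rule:
            print(f"Harry : using {rule}, I deduced remember {sentence}.")
        print(f"Harry : I will remember {sentence}.")
        self.facts.add(sentence)

        kind = self.kind_of.get(name)
        if kind:
            # affirm particular: one mean human licenses "some humans are mean".
            particular = f"some {kind}s are {prop}"
            if particular not in self.facts:
                print(f"Harry : using affirm particular, I deduced remember {particular}.")
                print(f"Harry : I will remember {particular}.")
                self.facts.add(particular)
        for antecedent, consequent in self.conditionals:
            if prop == antecedent:
                # modus ponens on "if a person is mean, then that person is ugly".
                self.tell(name, consequent, rule="modus ponens")


agent = ExplainingAgent()
agent.conditionals.append(("mean", "ugly"))
agent.tell_kind("charlie", "human")
agent.tell("charlie", "mean")
```

Running the sketch reproduces the first few lines of Harry's trace ("some humans are mean", "charlie is ugly", "some humans are ugly"), which is all the example is meant to show.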

2 Comments | Started February 16, 2019, 09:19:02 pm
avatar

Art

Introduction in General Robotics Talk

Today...

We were shopping in our local food store this afternoon and, while I was looking over some produce, I heard an unusual humming noise getting closer and closer. I looked up, turned slightly to my left, and there was this... robotic presence, almost as tall as me, with a large head and a bright blue light band around it, and farther down its main body/housing had another blue light encircling it. It neither said anything to me nor I to it. I did notice it had two large googly eyes near its "head". They were probably there for effect, whatever that might have been. Perhaps the "cute but harmless" factor.

I later found out that it is a store robot that polices certain areas looking for spills or hazards that might possibly cause injury or inconvenience to the customers. Later it will be tasked with helping keep track of inventory by scanning shelves for depleted items and contacting the office or stocking department.

Most people have recently seen it and really didn't give it too much thought. It's just there quietly going about its business, not bothering anyone.

I guess to me, this is the initial point of indoctrination of robotics becoming commonplace in our everyday lives but on a grander scale than some home vacuuming Neat-O or other cleaning automata. Even more than a self-driving car for this is like a "being" roving about the store in the presence of humans and there were no torches and pitchforks, no maddening crowds of haters! Just quiet acceptance that this is only the start and a part of our future.

uhh...cleanup in aisle 4!!

11 Comments | Started February 17, 2019, 09:23:41 pm
avatar

Tyler

New collaboration sparks global connections to art through artificial intelligence in Robotics News

New collaboration sparks global connections to art through artificial intelligence
5 February 2019, 6:00 pm

A unique event took place yesterday at The Metropolitan Museum of Art in New York City. Museum curators, engineers, designers, and researchers gathered in The Met’s iconic Great Hall to explore and share new visions about how artificial intelligence (AI) might drive stronger connection between people and art. A highlight from Monday’s festivities was the “reveal” of a series of artificial intelligence prototypes and design concepts, developed in collaboration across three institutions: The Met, Microsoft, and MIT.

Birth of a collaboration

For MIT, the collaboration began when Loic Tallon, The Met’s chief digital officer, visited the MIT campus to deliver an MIT Open Learning xTalk on the role of open access in empowering audiences and learners to experience art worldwide. Tallon views the collaboration as part of The Met’s initiative to drive global access to the museum’s collection through digital media: “We’re continuing to think differently about how a museum works, in this case how we leverage powerful technologies such as artificial intelligence. This collaboration among The Met, with our collection expertise, MIT with all these creative technologists and their incredible thinking about meeting tough challenges, and Microsoft with its AI platform has incredible synergy.”

MIT Open Learning and the MIT Knowledge Futures Group, two Institute organizations focused on the power of open data to create new knowledge, thus began a collaboration with The Met and Microsoft to spark global connections to art through AI.

The hackathon

On Dec. 12 and 13, the three collaborators came together to develop scalable new ways to engage the world through art and artificial intelligence. Curators from The Met joined MIT students and researchers, as well as expert technologists from Microsoft for a hackathon at Microsoft’s New England Research and Development Center. The ongoing projects from the hackathon, which were “revealed” Monday night, are:

  • Artwork of the Day - Using Microsoft AI to analyze open data sets, including location, weather, news, and historical data, it finds and delivers artwork from The Met collection that will resonate with users.
  • Tag, That’s It - Using crowdsourcing to fine-tune subject keyword results generated by an AI model, adding keywords from The Met’s archive into Wikidata and using Microsoft AI to generate more accurate keywords, Tag, That’s It enriches The Met collection with the global Wiki community.
  • Storyteller - Built with the help of MIT faculty participants Azra Akšamija and Lara Baladi, Storyteller uses Microsoft voice recognition AI to choose artworks in The Met collection that illustrate any story or any conversation.
  • My Life, My Met - Using Microsoft AI to analyze posts from Instagram, My Life, My Met substitutes one's images with the closest matching Open Access artworks from The Met collection, enabling individuals to bring art into their everyday interactions.
  • Gen Studio - Empowered by Microsoft AI, Gen Studio allows anyone to visually and creatively navigate the shared features and dimensions underlying The Met’s Open Access collection. Within the Gen Studio is a tapestry of experiences based on sophisticated generative adversarial networks (GANs) which invite users to explore, search, and be immersed within the latent space underlying The Met’s encyclopedic collection. It’s being built with the help of MIT visiting artist Matthew Ritchie, the Dasha Zhukova Distinguished Visiting Artist at the MIT Center for Art, Science and Technology, and Sarah Schwettmann, a graduate student in Brain and Cognitive Sciences and a member of the MIT Knowledge Futures Group.
The Met, as part of its Open Access program (which celebrated its second anniversary on Monday), has just released a newly developed “Subject Keywords” dataset of its collection. As Tallon explains, “We want to remove this idea that there’s only one way to engage with our collection. There are so many different ways of experiencing art, and many of those ways are being explored through the hackathon and beyond.”

Reasons to collaborate: synergies among art, AI, and developers

SJ Klein of MIT’s Knowledge Futures Group views the collaboration as building “a beautiful mosaic of a solution” that blends technology and people. “We're exploring how people can find new meaning and develop understanding of the world through large-scale collaborations with these increasingly iterative cycles of people and interpreting machines and networks all trying to make sense of the space,” he says.

For Ryan Gaspar, director of strategic partnerships at Microsoft’s Brand Studio, working with MIT and The Met means combining art, storytelling, and technology to create something unique. “The richness of the art and stories helps inform the technologists here from MIT and Microsoft. And then building on top of that our AI capabilities. We're already seeing some interesting concepts and ideas that neither MIT, Microsoft, nor The Met would have ever come up with on our own.”

A case study in AI for impact

Klein adds that the role of AI is to elevate the existing openness of The Met’s collection, promoting deeper audience engagement: “In terms of making the museum a platform for connection, open access alone isn’t enough. There's an entire discipline we're figuring out regarding what tools might support access and engagement. We’re building some of those tools now.”

Microsoft sees its role as empowering developers with AI tools and showing how AI can bring positive impact to the world. “We take a very optimistic view around how AI can actually drive empathy, foster connections and productivity, as well as support progress for society, humanity, and business. This collaboration is important for us to show the power and tangibility of what AI can do,” explains Gaspar.

The experience for the MIT community

The hackathon and its projects brought together students and faculty from across the Institute ranging from brain and cognitive sciences, the Media Lab, humanities, arts, and social sciences, engineering, computer science, and more. Such interdisciplinary exchange and hands-on collaboration, enabled by open access to data, knowledge, and tools, is at the root of MIT Open Learning’s approach to transforming teaching and learning.

For MIT students accustomed to tackling tough technical problems, the focus on problem-solving in the arts was a major plus. “It’s been fun working in the arts space, and thinking about cultural impact of the technology being built,” said MIT first-year undergraduate Isaac Lau. What MIT graduate student Sarah Schwettmann took away most was “the enjoyment of collaborating with The Met’s best curators and the top experts from Microsoft in finding innovative ways to engage people around art.”

Noelle LaCharite, who leads developer experience for cognitive services and AI at Microsoft, took the long-view about what MIT students learned: “These hackers are building important skills around identifying their own strengths and tapping into the strengths of others. They’re learning not to wait for permission, to take initiative, advocate for a vision, push it forward, and ask for help when needed. Those are classic life and work skills.”

Open development will continue on the tools built during the hackathon. As Sanjay Sarma, MIT vice president for open learning, explains: “MIT supports The Met’s commitment to open access, paired with the power of Microsoft AI, in order to empower people globally to create new knowledge and ways of experiencing art and culture that are so vital to our humanity.” Monday night’s event at The Met was a celebration of that openness and collaborative spirit.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started February 20, 2019, 12:00:58 pm
avatar

Tyler

XKCD Comic : Physics Suppression in XKCD Comic

Physics Suppression
18 February 2019, 5:00 am

If physics had a mafia, I'm pretty sure the BICEP2 mess would have ended in bloodshed.

Source: xkcd.com

Started February 20, 2019, 12:00:58 pm
avatar

Tyler

XKCD Comic : Error Bars in XKCD Comic

Error Bars
11 February 2019, 5:00 am

...an effect size of 1.68 (95% CI: 1.56 (95% CI: 1.52 (95% CI: 1.504 (95% CI: 1.494 (95% CI: 1.488 (95% CI: 1.485 (95% CI: 1.482 (95% CI: 1.481 (95% CI: 1.4799 (95% CI: 1.4791 (95% CI: 1.4784...

Source: xkcd.com

3 Comments | Started February 17, 2019, 12:01:50 pm
avatar

Tyler

MIMIC Chest X-Ray database to provide researchers access to over 350,000 patient radiographs in Robotics News

MIMIC Chest X-Ray database to provide researchers access to over 350,000 patient radiographs
1 February 2019, 5:40 pm

Computer vision, or the method of giving machines the ability to process images in an advanced way, has been given increased attention by researchers in the last several years. It is a broad term meant to encompass all the means through which images can be used to achieve medical aims. Applications range from automatically scanning photos taken on mobile phones to creating 3-D renderings that aid in patient evaluations on to developing algorithmic models for emergency room use in underserved areas.

As access to a greater number of images is apt to provide researchers with a volume of data ideal for developing better and more robust algorithms, a collection of visuals that have been enhanced, or scrubbed of patients' identifying details and then highlighted in critical areas, can have massive potential for researchers and radiologists who rely on photographic data in their work.

Last week, the MIT Laboratory for Computational Physiology, a part of the Institute for Medical Engineering and Science (IMES) led by Professor Roger Mark, launched a preview of their MIMIC-Chest X-Ray Database (MIMIC-CXR), a repository of more than 350,000 detailed chest X-rays gathered over five years from the Beth Israel Deaconess Medical Center in Boston. The project, like the lab’s previous MIMIC-III, which houses critical care patient data from over 40,000 intensive care unit stays, is free and open to academic, clinical, and industrial investigators via the research resource PhysioNet. It represents the largest selection of publicly available chest radiographs to date.

With access to the MIMIC-CXR, funded by Philips Research, registered users and their cohorts can more easily develop algorithms for fourteen of the most common findings from a chest X-ray, including pneumonia, cardiomegaly (enlarged heart), edema (excess fluid), and a punctured lung. By way of linking visual markers to specific diagnoses, machines can readily help clinicians draw more accurate conclusions faster and thus, handle more cases in a shorter amount of time. These algorithms could prove especially beneficial for doctors working in underfunded and understaffed hospitals.
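
As a very rough illustration of what one of those algorithms might look like, here is a hedged PyTorch sketch of a multi-label classifier over fourteen findings. Only the label count comes from the article; the backbone choice, tensor shapes, and training step are illustrative assumptions, not the MIMIC-CXR reference code.

```python
# A hedged sketch (not the MIMIC-CXR reference code; shapes, backbone, and training loop
# are illustrative assumptions) of a multi-label classifier for fourteen chest X-ray
# findings such as pneumonia, cardiomegaly, and edema.
import torch
import torch.nn as nn
from torchvision import models

NUM_FINDINGS = 14

# A standard image backbone with a new head that emits one logit per finding.
# (Pretrained weights could be loaded here; omitted to keep the sketch self-contained.)
model = models.densenet121()
model.classifier = nn.Linear(model.classifier.in_features, NUM_FINDINGS)

# Each radiograph may show several findings at once, so the target is a multi-hot
# vector and the loss treats every finding as an independent sigmoid/BCE problem.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images, labels):
    """images: (batch, 3, 224, 224) float tensor; labels: (batch, 14) multi-hot tensor."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for the credentialed-access radiographs.
print(training_step(torch.randn(2, 3, 224, 224), torch.randint(0, 2, (2, 14)).float()))
```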

“Rural areas typically have no radiologists,” says Research Scientist Alistair E. W. Johnson, co-developer of the database along with Tom J. Pollard, Nathaniel R. Greenbaum, and Matthew P. Lungren; Seth J. Berkowitz, director of radiology informatics innovation; Chih-ying Deng of Harvard Medical School; and Steven Horng, associate director of emergency medicine informatics at Beth Israel. “If you have a room full of ill patients and no time to consult an expert radiologist, that’s somewhere where a model can help.”

In the future, the lab hopes to link the X-ray archive to the MIMIC-III, thus forming a database that includes both patient ICU data and images. There are currently over 9,000 registered MIMIC-III users accessing critical care data, and the MIMIC-CXR would be a boon for those in critical care medicine looking to supplement clinical data with images.

Another asset of the database lies in its timing. Researchers at the Stanford Machine Learning Group and the Stanford Center for Artificial Intelligence in Medicine and Imaging released a similar dataset in January, collected over 15 years at Stanford Hospital. The MIT Laboratory for Computational Physiology and Stanford University groups collaborated to ensure that both datasets released could be used with minimal legwork for the interested researcher.

“With single center studies, you’re never sure if what you’ve found is true of everyone, or a consequence of the type of patients the hospital sees, or the way it gives its care,” Johnson says. “That’s why multicenter trials are so powerful. By working with Stanford, we’ve essentially empowered researchers around the world to run their own multicenter trials without having to spend the millions of dollars that typically costs.”

As with MIMIC-III, researchers will be able to gain access to MIMIC-CXR by first completing a training course on managing human subjects and then agreeing to cite the dataset in their published work.

“The next step is free text reports,” says Johnson. “We’re moving more towards having a complete history. When a radiologist is looking at a chest X-ray, they know who the person is and why they’re there. If we want to make radiologists’ lives easier, the models need to know who the person is, too.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started February 19, 2019, 12:00:40 pm
avatar

Tyler

XKCD Comic : Night Shift in XKCD Comic

Night Shift
15 February 2019, 5:00 am

Help, I set my white balance wrong and suddenly everyone is screaming at each other about whether they've been to Colorado.

Source: xkcd.com

Started February 19, 2019, 12:00:39 pm
avatar

Tyler

Putting neural networks under the microscope in Robotics News

Putting neural networks under the microscope
1 February 2019, 5:00 am

Researchers from MIT and the Qatar Computing Research Institute (QCRI) are putting the machine-learning systems known as neural networks under the microscope.

In a study that sheds light on how these systems manage to translate text from one language to another, the researchers developed a method that pinpoints individual nodes, or “neurons,” in the networks that capture specific linguistic features.

Neural networks learn to perform computational tasks by processing huge sets of training data. In machine translation, a network crunches language data annotated by humans, and presumably “learns” linguistic features, such as word morphology, sentence structure, and word meaning. Given new text, these networks match these learned features from one language to another, and produce a translation.

But, in training, these networks basically adjust internal settings and values in ways the creators can’t interpret. For machine translation, that means the creators don’t necessarily know which linguistic features the network captures.

In a paper being presented at this week’s Association for the Advancement of Artificial Intelligence conference, the researchers describe a method that identifies which neurons are most active when classifying specific linguistic features. They also designed a toolkit for users to analyze and manipulate how their networks translate text for various purposes, such as making up for any classification biases in the training data.

In their paper, the researchers pinpoint neurons that are used to classify, for instance, gendered words, past and present tenses, numbers at the beginning or middle of sentences, and plural and singular words. They also show how some of these tasks require many neurons, while others require only one or two.

“Our research aims to look inside neural networks for language and see what information they learn,” says co-author Yonatan Belinkov, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “This work is about gaining a more fine-grained understanding of neural networks and having better control of how these models behave.”

Co-authors on the paper are: senior research scientist James Glass and undergraduate student Anthony Bau, of CSAIL; and Hassan Sajjad, Nadir Durrani, and Fahim Dalvi, of QCRI, part of Hamad Bin Khalifa University.

Putting a microscope on neurons

Neural networks are structured in layers, where each layer consists of many processing nodes, each connected to nodes in layers above and below. Data are first processed in the lowest layer, which passes an output to the above layer, and so on. Each output has a different “weight” to determine how much it figures into the next layer’s computation. During training, these weights are constantly readjusted.

Neural networks used for machine translation train on annotated language data. In training, each layer learns different “word embeddings” for one word. Word embeddings are essentially tables of several hundred numbers combined in a way that corresponds to one word and that word’s function in a sentence. Each number in the embedding is calculated by a single neuron.
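
To make that picture concrete, here is a toy NumPy view (my own illustration, not the researchers' code): a layer's output over a sentence is a token-by-neuron matrix, so each row is one word's embedding at that layer and each column holds the single number one neuron contributes for every token.

```python
# Toy illustration only: embeddings as rows, neurons as columns of a layer's output.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["the", "doctor", "arrived"]
num_neurons = 500                      # "several hundred numbers" per embedding

layer_output = rng.standard_normal((len(tokens), num_neurons))

doctor_embedding = layer_output[1]     # one row = one word's embedding at this layer
neuron_42 = layer_output[:, 42]        # one column = what a single neuron says per token
print(doctor_embedding.shape, neuron_42.shape)   # (500,) (3,)
```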

In their past work, the researchers trained a model to analyze the weighted outputs of each layer to determine how the layers classified any given embedding. They found that lower layers classified relatively simpler linguistic features — such as the structure of a particular word — and higher levels helped classify more complex features, such as how the words combine to form meaning.

In their new work, the researchers use this approach to determine how learned word embeddings make a linguistic classification. But they also implemented a new technique, called “linguistic correlation analysis,” that trains a model to home in on the individual neurons in each word embedding that were most important in the classification.

The new technique combines all the embeddings captured from different layers — which each contain information about the word’s final classification — into a single embedding. As the network classifies a given word, the model learns weights for every neuron that was activated during each classification process. This provides a weight to each neuron in each word embedding that fired for a specific part of the classification.

“The idea is, if this neuron is important, there should be a high weight that’s learned,” Belinkov says. “The neurons with high weights are the ones more important to predicting the certain linguistic property. You can think of the neurons as a lot of knobs you need to turn to get the correct combination of numbers in the embedding. Some knobs are more important than others, so the technique is a way to assign importance to those knobs.”
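
The gist of that weighting step can be sketched in a few lines (a simplification under my own assumptions, not the paper's exact "linguistic correlation analysis"): concatenate the per-layer embeddings for each word, fit a linear probe for one linguistic property, and treat the absolute learned weights as neuron importance.

```python
# Simplified sketch of ranking neurons by the weights a linear probe learns for one
# linguistic property. Synthetic data; the real method uses trained translation models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
num_words, layers, neurons_per_layer = 2000, 4, 500

# One row per word: all layers' embeddings concatenated into a single vector.
X = rng.standard_normal((num_words, layers * neurons_per_layer))
y = rng.integers(0, 2, num_words)      # e.g. 1 = past tense, 0 = present (synthetic labels)

probe = LogisticRegression(max_iter=1000).fit(X, y)

# "If this neuron is important, there should be a high weight that's learned."
importance = np.abs(probe.coef_[0])
top_neurons = np.argsort(importance)[::-1][:10]
print("Highest-ranked neuron indices:", top_neurons)
```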

Neuron ablation, model manipulation

Because each neuron is weighted, it can be ranked in order of importance. To that end, the researchers designed a toolkit, called NeuroX, that automatically ranks all neurons of a neural network according to their importance and visualizes them in a web interface.

Users upload a network they’ve already trained, as well as new text. The app displays the text and, next to it, a list of specific neurons, each with an identification number. When a user clicks on a neuron, the text will be highlighted depending on which words and phrases the neuron activates for. From there, users can completely knock out — or “ablate” — the neurons, or modify the extent of their activation, to control how the network translates.

Ablation was used to determine whether the researchers’ method had accurately pinpointed the correct high-ranking neurons. In their paper, the researchers show that ablating high-ranking neurons in a network significantly reduced its performance in classifying correlated linguistic features. When they ablated lower-ranking neurons instead, performance suffered, but not as dramatically.

“After you get all these rankings, you want to see what happens when you kill these neurons and see how badly it affects performance,” Belinkov says. “That’s an important result proving that the neurons we find are, in fact, important to the classification process.”
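
Continuing the synthetic sketch above (again my own illustration rather than the NeuroX implementation), the ablation check looks roughly like this: zero out the top-ranked neurons, re-score the probe, and compare against ablating low-ranked ones.

```python
# Illustrative ablation check: zeroing the highest-ranked neurons should hurt the probe
# far more than zeroing the lowest-ranked ones. Synthetic data for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
num_words, num_neurons = 2000, 500
X = rng.standard_normal((num_words, num_neurons))
# Make a handful of neurons genuinely predictive so ablation has a visible effect.
informative = [3, 57, 123]
y = (X[:, informative].sum(axis=1) > 0).astype(int)

probe = LogisticRegression(max_iter=1000).fit(X, y)
ranking = np.argsort(np.abs(probe.coef_[0]))[::-1]

def accuracy_with_ablation(neurons_to_zero):
    X_ablated = X.copy()
    X_ablated[:, neurons_to_zero] = 0.0          # "knock out" those neurons
    return probe.score(X_ablated, y)

print("intact:          ", probe.score(X, y))
print("top 3 ablated:   ", accuracy_with_ablation(ranking[:3]))     # drops sharply
print("bottom 3 ablated:", accuracy_with_ablation(ranking[-3:]))    # barely changes
```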

One interesting application for the method is helping limit biases in language data. Machine-translation models, such as Google Translate, may train on data with gender bias, which can be problematic for languages with gendered words. Certain professions, for instance, may be more often referred to as male, and others as female. When a network translates new text, it may only produce the learned gender for those words. In many online English-to-Spanish translations, for instance, “doctor” often translates into its masculine version, while “nurse” translates into its feminine version.

“But we find we can trace individual neurons in charge of linguistic properties like gender,” Belinkov says. “If you’re able to trace them, maybe you can intervene somehow and influence the translation to translate these words more to the opposite gender … to remove or mitigate the bias.”

In preliminary experiments, the researchers modified neurons in a network to change translated text from past to present tense with 67 percent accuracy. They modified neurons to switch the gender of words with 21 percent accuracy. “It’s still a work in progress,” Belinkov says. A next step, he adds, is improving the methodology to achieve more accurate ablation and manipulation.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started February 18, 2019, 12:04:45 pm
avatar

Tyler

XKCD Comic : Opportunity Rover in XKCD Comic

Opportunity Rover
13 February 2019, 5:00 am

Thanks for bringing us along.

Source: xkcd.com

Started February 18, 2019, 12:04:45 pm
Mortal Engines

Mortal Engines in Robots in Movies

Mortal Engines is a 2018 post-apocalyptic adventure film directed by Christian Rivers, with a screenplay by Fran Walsh, Philippa Boyens and Peter Jackson, based on the novel of the same name by Philip Reeve.

Tom (Robert Sheehan) is a young Londoner who has only ever lived inside his travelling hometown, and his feet have never touched grass, mud or land. His first taste of the outside comes quite abruptly: Tom gets in the way of an attempt by the masked Hester (Hera Hilmar) to kill Thaddeus Valentine (Hugo Weaving), a powerful man she blames for her mother’s murder, and both Hester and Tom end up thrown out of the moving "traction" city, to fend for themselves.

The film also stars Stephen Lang as Shrike, Hester's guardian and the last of an undead battalion of soldiers known as Stalkers: war casualties re-animated with machine parts.

Dec 08, 2018, 18:50:44 pm
Alita: Battle Angel

Alita: Battle Angel in Robots in Movies

Alita: Battle Angel is an upcoming American cyberpunk action film based on Yukito Kishiro's manga Battle Angel Alita. Produced by James Cameron and Jon Landau, the film is directed by Robert Rodriguez from a screenplay by Cameron and Laeta Kalogridis.

Visionary filmmakers James Cameron (AVATAR) and Robert Rodriguez (SIN CITY) create a groundbreaking new heroine in ALITA: BATTLE ANGEL, an action-packed story of hope, love and empowerment. Set several centuries in the future, the abandoned Alita (Rosa Salazar) is found in the scrapyard of Iron City by Ido (Christoph Waltz), a compassionate cyber-doctor who takes the unconscious cyborg Alita to his clinic. When Alita awakens she has no memory of who she is, nor does she have any recognition of the world she finds herself in. Everything is new to Alita, every experience a first.

As she learns to navigate her new life and the treacherous streets of Iron City, Ido tries to shield Alita from her mysterious past while her street-smart new friend, Hugo (Keean Johnson), offers instead to help trigger her memories. A growing affection develops between the two until deadly forces come after Alita and threaten her newfound relationships. It is then that Alita discovers she has extraordinary fighting abilities that could be used to save the friends and family she’s grown to love.

Determined to uncover the truth behind her origin, Alita sets out on a journey that will lead her to take on the injustices of this dark, corrupt world, and discover that one young woman can change the world in which she lives.

Scheduled to be released on February 14, 2019

Nov 16, 2018, 18:25:25 pm
The Beyond

The Beyond in Robots in Movies

A team of robotically-advanced astronauts travel through a new wormhole, but the mission returns early, sparking questions about what was discovered.

Nov 12, 2018, 22:38:18 pm
Mitsuku wins Loebner Prize 2018!

Mitsuku wins Loebner Prize 2018! in Articles

The Loebner Prize 2018 was held in Bletchley Park, England on September 8th this year and Mitsuku won it for a 4th time to equal the record number of wins. Only 2 other people (Joseph Weintraub and Bruce Wilcox) have achieved this. In this blog, I’ll explain more about the event, the day itself and a few personal thoughts about the future of the contest.

Sep 17, 2018, 19:10:51 pm
Automata (Series)

Automata (Series) in Robots on TV

In an alternate 1930s Prohibition-era New York City, it's not liquor that is outlawed but the future production of highly sentient robots known as automatons. Automata follows former NYPD detective turned private eye Sam Regal and his incredibly smart automaton partner, Carl Swangee. Together, they work to solve the case and understand each other in this dystopian America.

Sep 08, 2018, 00:16:22 am
Steve Worswick (Mitsuku) on BBC Radio 4

Steve Worswick (Mitsuku) on BBC Radio 4 in Other

Steve Worswick: "I appeared on BBC Radio 4 in August in a feature about chatbots. Leeds Beckett University were using one to offer places to students."

Sep 06, 2018, 23:50:39 pm
Extinction

Extinction in Robots in Movies

Extinction is a 2018 American science fiction thriller film directed by Ben Young and written by Spenser Cohen, Eric Heisserer and Brad Kane. The film stars Lizzy Caplan, Michael Peña, Mike Colter, Lilly Aspell, Emma Booth, Israel Broussard, and Lex Shrapnel. It was released on Netflix on July 27, 2018.

Peter, an engineer, has recurring nightmares in which he and his family suffer through violent, alien invasion-like confrontations with an unknown enemy. As the nightmares become more stressful, they take a toll on his family, too.

Sep 06, 2018, 23:42:51 pm
Tau

Tau in Robots in Movies

Tau is a 2018 science fiction thriller film, directed by Federico D'Alessandro, from a screenplay by Noga Landau. It stars Maika Monroe, Ed Skrein and Gary Oldman.

It was released on June 29, 2018, by Netflix.

Julia is a loner who makes money as a thief in seedy nightclubs. One night, she is abducted from her home and wakes up restrained and gagged in a dark prison inside of a home with two other people, each with an implant in the back of their necks. As "subject 3," she endures a series of torturous psychological sessions by a shadowy figure in a lab. One night, she steals a pair of scissors and destroys the lab in an escape attempt, but she is stopped and the other two subjects are killed by a robot in the house, Aries, run by an artificial intelligence, Tau.

Alex, the technology executive who owns the house, reveals the implant is collecting her neural activity as she completes puzzles, and subjects her to more tests, because he is using the data to develop more advanced A.I. with a big project deadline in a few days.

Sep 06, 2018, 23:30:00 pm
Bot Development Frameworks - Getting Started

Bot Development Frameworks - Getting Started in Articles

What Are Bot Frameworks?

Simply explained, a bot framework is where bots are built and where their behavior is defined. Developing for the many messaging platforms and SDKs available for chatbot development can be overwhelming, so bot development frameworks abstract away much of the manual work involved in building chatbots. A bot development framework typically consists of a Bot Builder SDK, a Bot Connector, a Developer Portal, and a Bot Directory. There’s also an emulator that you can use to test the developed bot.
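
To show the kind of plumbing these frameworks wrap for you, here is a small framework-agnostic Python sketch of the handler-registration pattern that most bot SDKs provide in some form; every name in it is illustrative and not part of any real SDK.

```python
# Generic, framework-agnostic sketch of a bot whose behavior is defined by registered
# handlers. Illustrative only; not the API of any real Bot Builder SDK.

class MiniBot:
    def __init__(self):
        self.handlers = []               # (predicate, handler) pairs

    def on(self, predicate):
        """Decorator: run the handler when predicate(message) is true."""
        def register(handler):
            self.handlers.append((predicate, handler))
            return handler
        return register

    def handle(self, message):
        # A real connector would hand us messages from a channel; we dispatch locally.
        for predicate, handler in self.handlers:
            if predicate(message):
                return handler(message)
        return "Sorry, I didn't understand that."

bot = MiniBot()

@bot.on(lambda m: m.lower().startswith("hello"))
def greet(message):
    return "Hello! How can I help?"

@bot.on(lambda m: "weather" in m.lower())
def weather(message):
    return "I can't check real forecasts in this sketch."

print(bot.handle("Hello there"))
print(bot.handle("What's the weather?"))
```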

Mar 23, 2018, 20:00:23 pm