gizmowl

The Future of AI Is The Past! Pre-Internet Marketing & Branding in Future of AI

Below is an article I posted on my LinkedIn account with some of my views on the future of AI.

Of course, to some it may sound simplistic, but there is the marketing aspect to consider: history is rife with great inventors who could not brand and market their inventions well.

We came up with the brand Gizmowl!

The Future of Artificial Intelligence? Branding And Packaging!



With all the talk about AI this and data that, one begins to wonder what the future of artificial intelligence holds.

After all, without character, what is AI but a creepy-looking humanoid robot asking if you would like cream and sugar in your coffee?

My name is Jesse Gilbert with Brainstorm Magic Creations.

After more than a decade working in copywriting, advertising, branding, and software development, I've come to the conclusion that the future of AI will largely depend on the characters that integrate it.

Perhaps I'm a bit biased because my company has created a character and a robot called Gizmowl that I believe encompasses the theme of AI with clever branding...

But a quick search for 'artificial intelligence' or 'drones' on the top crowdfunding sites will reveal a plethora of projects...

Some delivered, some failed... but one common theme I noticed is that many of these technologies, while brilliant, lacked interesting characters and packaging to bring them to life.

This is the focus of my company: creating the character 'Gizmowl', which can potentially serve as packaging for a number of emerging technologies, including but not limited to artificial intelligence and drones.

What does the future of AI hold?

I don't know for sure, but I think it will be more interesting and fun with characters like Gizmowl, who speak the wisdom of the ages with a bit of wit...

Rather than the drier robots we commonly see pictured.

Perhaps the future is actually the past in some ways...

The artificial intelligence products that lead the way may actually find the most success with a focus on character building and branding techniques that built brand names before the internet was born.

5 Comments | Started November 30, 2017, 08:28:17 am
Freddy

The Greater Good - Mind Field in Future of AI

A friend posted this over on Datahopa, thought people here might be interested.

Quote
Would you reroute a train to run over one person to prevent it from running over five others? In the classic “Trolley Problem” survey, most people say they would. But I wanted to test what people would actually do in a real-life situation. In the world’s first realistic simulation of this controversial moral dilemma, unsuspecting subjects will be forced to make what they believe is a life-or-death decision.  

2 Comments | Started December 13, 2017, 09:43:00 pm
Tyler

Computer systems predict objects’ responses to physical forces in Robotics News

Computer systems predict objects’ responses to physical forces
14 December 2017, 4:59 am

Josh Tenenbaum, a professor of brain and cognitive sciences at MIT, directs research on the development of intelligence at the Center for Brains, Minds, and Machines, a multiuniversity, multidisciplinary project based at MIT that seeks to explain and replicate human intelligence.

Presenting their work at this year’s Conference on Neural Information Processing Systems, Tenenbaum and one of his students, Jiajun Wu, are co-authors on four papers that examine the fundamental cognitive abilities that an intelligent agent requires to navigate the world: discerning distinct objects and inferring how they respond to physical forces.

By building computer systems that begin to approximate these capacities, the researchers believe they can help answer questions about what information-processing resources human beings use at what stages of development. Along the way, the researchers might also generate some insights useful for robotic vision systems.

“The common theme here is really learning to perceive physics,” Tenenbaum says. “That starts with seeing the full 3-D shapes of objects, and multiple objects in a scene, along with their physical properties, like mass and friction, then reasoning about how these objects will move over time. Jiajun’s four papers address this whole space. Taken together, we’re starting to be able to build machines that capture more and more of people’s basic understanding of the physical world.”

Three of the papers deal with inferring information about the physical structure of objects, from both visual and aural data. The fourth deals with predicting how objects will behave on the basis of that data.

Two-way street

Something else that unites all four papers is their unusual approach to machine learning, a technique in which computers learn to perform computational tasks by analyzing huge sets of training data. In a typical machine-learning system, the training data are labeled: Human analysts will have, say, identified the objects in a visual scene or transcribed the words of a spoken sentence. The system attempts to learn what features of the data correlate with what labels, and it’s judged on how well it labels previously unseen data.

In Wu and Tenenbaum’s new papers, the system is trained to infer a physical model of the world — the 3-D shapes of objects that are mostly hidden from view, for instance. But then it works backward, using the model to resynthesize the input data, and its performance is judged on how well the reconstructed data matches the original data.

For instance, using visual images to build a 3-D model of an object in a scene requires stripping away any occluding objects; filtering out confounding visual textures, reflections, and shadows; and inferring the shape of unseen surfaces. Once Wu and Tenenbaum’s system has built such a model, however, it rotates it in space and adds visual textures back in until it can approximate the input data.
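
In machine-learning terms, this resynthesis criterion is an analysis-by-synthesis objective, structurally similar to an autoencoder trained on reconstruction error. A minimal sketch of that training loop, with toy dimensions and random stand-in data rather than the paper's actual architecture:

Code:
import torch
from torch import nn

# The encoder infers a latent "physical model" of each observation; the
# decoder resynthesizes the observation from it; the only training
# signal is how well the reconstruction matches the input -- no labels.
encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))
decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 64))
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

observations = torch.randn(256, 64)  # stand-in for flattened images
for _ in range(100):
    latent = encoder(observations)        # inferred model of the scene
    reconstruction = decoder(latent)      # resynthesized input data
    loss = nn.functional.mse_loss(reconstruction, observations)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()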

Indeed, two of the researchers’ four papers address the complex problem of inferring 3-D models from visual data. On those papers, they’re joined by four other MIT researchers, including William Freeman, the Perkins Professor of Electrical Engineering and Computer Science, and by colleagues at DeepMind, ShanghaiTech University, and Shanghai Jiao Tong University.

Divide and conquer

The researchers’ system is based on the influential theories of the MIT neuroscientist David Marr, who died in 1980 at the tragically young age of 35. Marr hypothesized that in interpreting a visual scene, the brain first creates what he called a 2.5-D sketch of the objects it contained — a representation of just those surfaces of the objects facing the viewer. Then, on the basis of the 2.5-D sketch — not the raw visual information about the scene — the brain infers the full, three-dimensional shapes of the objects.

“Both problems are very hard, but there’s a nice way to disentangle them,” Wu says. “You can do them one at a time, so you don’t have to deal with both of them at the same time, which is even harder.”

Wu and his colleagues’ system needs to be trained on data that include both visual images and 3-D models of the objects the images depict. Constructing accurate 3-D models of the objects depicted in real photographs would be prohibitively time consuming, so initially, the researchers train their system using synthetic data, in which the visual image is generated from the 3-D model, rather than vice versa. The process of creating the data is like that of creating a computer-animated film.

Once the system has been trained on synthetic data, however, it can be fine-tuned using real data. That’s because its ultimate performance criterion is the accuracy with which it reconstructs the input data. It’s still building 3-D models, but they don’t need to be compared to human-constructed models for performance assessment.

In evaluating their system, the researchers used a measure called intersection over union, which is common in the field. On that measure, their system outperforms its predecessors. But a given intersection-over-union score leaves a lot of room for local variation in the smoothness and shape of a 3-D model. So Wu and his colleagues also conducted a qualitative study of the models’ fidelity to the source images. Of the study’s participants, 74 percent preferred the new system’s reconstructions to those of its predecessors.
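
The metric itself is simple: the volume two shapes share, divided by the volume they jointly cover. Here is a minimal sketch for voxelized 3-D models (the voxel-grid representation is an assumption; the paper's exact evaluation setup isn't described in this excerpt):

Code:
import numpy as np

def voxel_iou(pred, target):
    """Intersection over union for two boolean occupancy grids."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both grids empty: count as a perfect match
    return np.logical_and(pred, target).sum() / union

# Two independent random 32x32x32 grids score about 1/3;
# a grid against itself scores exactly 1.0.
rng = np.random.default_rng(0)
a = rng.random((32, 32, 32)) > 0.5
b = rng.random((32, 32, 32)) > 0.5
print(voxel_iou(a, b), voxel_iou(a, a))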

All that fall

In another of Wu and Tenenbaum’s papers, on which they’re joined again by Freeman and by researchers at MIT, Cambridge University, and ShanghaiTech University, they train a system to analyze audio recordings of an object being dropped, to infer properties such as the object’s shape, its composition, and the height from which it fell. Again, the system is trained to produce an abstract representation of the object, which, in turn, it uses to synthesize the sound the object would make when dropped from a particular height. The system’s performance is judged on the similarity between the synthesized sound and the source sound.

Finally, in their fourth paper, Wu, Tenenbaum, Freeman, and colleagues at DeepMind and Oxford University describe a system that begins to model humans’ intuitive understanding of the physical forces acting on objects in the world. This paper picks up where the previous papers leave off: It assumes that the system has already deduced objects’ 3-D shapes.

Those shapes are simple: balls and cubes. The researchers trained their system to perform two tasks. The first is to estimate the velocities of balls traveling on a billiard table and, on that basis, to predict how they will behave after a collision. The second is to analyze a static image of stacked cubes and determine whether they will fall and, if so, where the cubes will land.

Wu developed a representational language he calls scene XML that can quantitatively characterize the relative positions of objects in a visual scene. The system first learns to describe input data in that language. It then feeds that description to something called a physics engine, which models the physical forces acting on the represented objects. Physics engines are a staple of both computer animation, where they generate the movement of clothing, falling objects, and the like, and of scientific computing, where they’re used for large-scale physical simulations.
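
The excerpt doesn't show what scene XML looks like, but the physics-engine stage it feeds is easy to illustrate: given object positions and velocities, step the dynamics forward in time. A toy sketch (the state layout and dynamics here are illustrative assumptions, not Wu's engine):

Code:
import numpy as np

def physics_step(pos, vel, dt=0.01, friction=0.2):
    """One Euler integration step for balls on a frictional table.

    Illustrative only: a real engine also resolves collisions, spin,
    cushion bounces, and so on.
    """
    pos = pos + vel * dt
    vel = vel * max(0.0, 1.0 - friction * dt)
    return pos, vel

# Roll two balls forward from an initial scene description.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])   # positions in metres
vel = np.array([[0.5, 0.0], [-0.3, 0.0]])  # velocities in m/s
for _ in range(200):
    pos, vel = physics_step(pos, vel)
print(pos)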

After the physics engine has predicted the motions of the balls and boxes, that information is fed to a graphics engine, whose output is, again, compared with the source images. As with the work on visual discrimination, the researchers train their system on synthetic data before refining it with real data.

In tests, the researchers’ system again outperformed its predecessors. In fact, in some of the tests involving billiard balls, it frequently outperformed human observers as well.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started December 14, 2017, 12:02:19 pm
Tyler

XKCD Comic : Seven Years in XKCD Comic

Seven Years
13 December 2017, 5:00 am

Image: https://imgs.xkcd.com/comics/seven_years.png (alt: [hair in face] "SEVVVENNN YEEEARRRSSS")

Source: xkcd.com

Started December 14, 2017, 12:02:16 pm
Art

Ultra Hal 7 - News Update in UltraHal

After passing through Alpha testing, it has recently gone into Beta and shouldn't stay there much longer before moving to RC (Release Candidate), then Final.
Although I am not privy to a public release date, I would have to think it will be within a relatively short time frame.

Most testing has gone quite well and the new Hal 7 will have a lot of really nice and productive features.

http://www.ultrahal.com/community/index.php?topic=14077.0

7 Comments | Started December 11, 2017, 02:48:22 pm
ranch vermin

new demo on the way in AI Programming

I've got the next week all planned out. This will be a continuous thread; sorry for the abrupt making of threads up till now, I've definitely got something solid to do this time.

Here's a screenshot of yesterday's and today's work. Next I need animation, and then I'm going to code the bot's brain; it's all organized start to finish, and I know it pretty well now. I've also taken a bit out of the brain design, which should give me something easier to make right now, because I've gone a long time with no results and I think it's time to actually implement something.



13 Comments | Started November 10, 2017, 12:20:31 pm
elpidiovaldez5

Deepmind's Imagination Augmented Agents - should they use GANs ? in General AI Discussion

I just read Deepmind's paper on their new Reinforcement Learning system which uses 'imagination' during problem solving.  It's pretty cool. The features I liked are:

  • Plans ahead using an Environment Model (EM), which predicts what will happen when it takes an action in a given state.
  • Can build the environment model as it learns, or use a supplied model.
  • Deliberately robust to errors in the EM. If the EM does not help, the system learns to ignore or down-weight it, thus falling back on standard Reinforcement Learning.

The planning uses various simple look-ahead schemes, e.g.:

  • Considering all alternative actions for next step.
  • Recursively chaining actions to predict (imagine) the result of the next N steps.
  • Learning to combine the above two methods to perform plan-tree expansion (though I think they never actually used this idea).

Clearly the Environment Model is the interesting part of the system. The details are a bit sketchy. The idea is to train a neural net that takes the current state and a possible action as input, and outputs a probabilistic, imagined next state. Since the input state is a pixel image giving a view of a game, the output state gives pixel probabilities.

The EM clearly works playing Sokoban, but it occurs to me that actions are quite deterministic in this game: if you push a box forward into an empty space, the player and the box both move forward one step. The EM should generate near-100% probability for the new positions. The situation would be quite different if the action were, say, letting a pen balanced on its end fall over. Here the resulting pen position is quite non-deterministic, although there is a well-defined locus of positions where it might end up. A probabilistic model would 'average' together the possible positions, giving a small probability to all positions on a circle. That is not what really happens.

Hence my reason for writing this message. Could a more sophisticated EM not be implemented using Generative Adversarial Networks? These are well known for 'imagination' applications. The benefit is that they generate realistic, specific outcomes; they do not blur together multiple possibilities. Of course, one could run the GAN multiple times to search within the variation in outcomes. If it is run enough times, the probability distribution emerges.
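
To make that concrete, here is a minimal sketch of "run the generator many times and read off the empirical distribution". The generator below is a noisy stub standing in for a trained conditional GAN:

Code:
import numpy as np

def generator(state, action, rng):
    """Stand-in for a conditional GAN generator G(state, action, z).

    A real EM would be a learned network emitting one specific,
    realistic next state per noise sample z.
    """
    z = rng.standard_normal(state.shape)
    return state + action + 0.1 * z

# Sample many imagined next states; their empirical spread approximates
# the outcome distribution (e.g. the circle of resting pen positions).
rng = np.random.default_rng(0)
state, action = np.zeros(2), np.array([1.0, 0.0])
samples = np.stack([generator(state, action, rng) for _ in range(1000)])
print(samples.mean(axis=0))  # close to [1.0, 0.0]
print(samples.std(axis=0))   # close to [0.1, 0.1]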

Of course, the system just needs information from the EM to choose the next step. It is possible that a blurry probability distribution provides this information better than samples of actual possible future events. Thoughts?

12 Comments | Started October 15, 2017, 02:54:08 pm
Tyler

Four from MIT named 2017 Association for Computing Machinery Fellows in Robotics News

Four from MIT named 2017 Association for Computing Machinery Fellows
11 December 2017, 4:00 pm

Today four MIT faculty were named among the Association for Computing Machinery's 2017 Fellows for making “landmark contributions to computing.”

Honorees included School of Science Dean Michael Sipser and three researchers affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL): Shafi Goldwasser, Tomás Lozano-Pérez, and Silvio Micali.

The professors were among fewer than 1 percent of Association for Computing Machinery (ACM) members to receive the distinction. Fellows are named for contributions spanning such disciplines as graphics, vision, software design, algorithms, and theory.

“Shafi, Tomás, Silvio, and Michael are very esteemed colleagues and friends, and I’m so happy to see that their contributions have been recognized with ACM’s most prestigious member grade,” said CSAIL Director Daniela Rus, who herself was named an ACM Fellow in 2014. “All of us at MIT are very proud of them for receiving this distinguished honor.”

Goldwasser was selected for “transformative work that laid the complexity-theoretic foundations for the science of cryptography.” This work has helped spur entire subfields of computer science, including zero-knowledge proofs, cryptographic theory, and probabilistically checkable proofs. In 2012 she received ACM’s Turing Award, often referred to as “the Nobel Prize of computing.”

Lozano-Pérez was recognized for “contributions to robotics, and motion planning, geometric algorithms, and their applications.” His current work focuses on integrating task, motion, and decision planning for robotic manipulation. He was a recipient of the 2011 IEEE Robotics Pioneer Award, and is also a 2014 MacVicar Fellow and a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and of the IEEE.

Like Goldwasser, Micali was also honored for his work in cryptography and complexity theory, including his pioneering of new methods for the efficient verification of mathematical proofs. His work has had a major impact on how computer scientists understand concepts like randomness and privacy. Current interests include zero-knowledge proofs, secure protocols, and pseudorandom generation. He has also received the Turing Award, the Gödel Prize in theoretical computer science, and the RSA Prize in cryptography.

Sipser, the Donner Professor of Mathematics, was recognized for “contributions to computational complexity, particularly randomized computation and circuit complexity.” With collaborators at Carnegie Mellon University, Sipser introduced the method of probabilistic restriction for proving super-polynomial lower bounds on circuit complexity, and this result was later improved by others to be an exponential lower bound. He is a fellow of the American Academy of Arts and Sciences and the American Mathematical Society, and a 2016 MacVicar Fellow. He is also the author of the widely used textbook, "Introduction to the Theory of Computation."

ACM will formally recognize the fellows at its annual awards banquet on Saturday, June 23, 2018 in San Francisco, California.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started December 12, 2017, 12:00:16 pm
Tyler

XKCD Comic : Tinder in XKCD Comic

Tinder
11 December 2017, 5:00 am

People keep telling me to use the radio but I really hate making voice calls.

Source: xkcd.com

Started December 12, 2017, 12:00:15 pm
Gustavo6046

Neural networks and Markov: a new potential, with a single problem. in AI Programming

I have a new idea in which Markov chains build sentences and the connections between nodes are chosen by neural networks. The maximum number of normalizable connections for each Markov node is 128. The problem is how to form the reply when the input is the sentence that came *before*. To do that, I need a neural network that gets the next node from the current reply node AND the input sequence. Seq2seq networks are not an option.

My possible solution would be to convert a phrase into a number using an encoder neural network. E.g., with the phrase "Hello, my dear son!" we iterate from inputs ["Hello", 0] → A, where A is the output of the neural network. Then we do it again, but with the word "my", so that ["my", A] → A + B. And so on, until we convert the whole phrase to A + B + C + D, where the plus sign isn't a sum but some sort of joining that goes on inside the neural network.

That number is then passed into a decoder neural network, such that [0, A + B + C + D] → [N₁, A + B + C + D], then [N₁, A + B + C + D] → [N₂, A + B + C + D], and so on, until the output returns to [0, A + B + C + D], marking the end of the reply. Each Nₙ is denormalized into the word that corresponds to the nth node following node Nₙ₋₁.
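
To make the "joining" concrete, here is a minimal sketch of that encoder as a plain recurrent network. The weights, embedding, and dimensions are all illustrative assumptions:

Code:
import numpy as np

rng = np.random.default_rng(0)
W_in = 0.1 * rng.standard_normal((16, 8))    # word embedding -> state
W_rec = 0.1 * rng.standard_normal((16, 16))  # previous state -> state

def embed(word):
    """Hypothetical fixed word embedding; a real system would learn it."""
    v = np.zeros(8)
    v[sum(map(ord, word)) % 8] = 1.0
    return v

def encode(phrase):
    """Fold words into one state: ["Hello", 0] -> A, ["my", A] -> A+B, ..."""
    state = np.zeros(16)  # the initial "0" input
    for word in phrase.split():
        state = np.tanh(W_in @ embed(word) + W_rec @ state)
    return state  # the joined A + B + C + D representation

print(encode("Hello, my dear son!"))

A decoder would then run the same kind of loop in reverse, conditioning each step on this vector to pick the next Markov node.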

What about you? Any better solutions or suggestions? :)

6 Comments | Started December 04, 2017, 04:21:16 pm
What are the main techniques for the development of a good chatbot ?

What are the main techniques for the development of a good chatbot ? in Articles

Chatbots are among the most useful and reliable technological helpers for those who own e-commerce websites and similar resources. However, an important problem is that people may not know which technologies are best suited to achieving their goals. In today's article, you can become more familiar with the most important principles of chatbot building.

Oct 12, 2017, 01:31:00 am
Kweri

Kweri in Chatbots - English

Kweri asks you questions of brilliance and stupidity. Provide correct answers to win. Type ‘Y’ for yes and ‘N’ for no!

Links:

FB Messenger
https://www.messenger.com/t/kweri.chat

Telegram
https://telegram.me/kweribot

Slack
https://slack.com/apps/A5JKP5TND-kweri

Kik
http://taell.me/kweri-kik

Line
http://taell.me/kweri-line/

Skype
http://taell.me/kweri-skype/

Oct 12, 2017, 01:24:37 am
The Conversational Interface: Talking to Smart Devices

The Conversational Interface: Talking to Smart Devices in Books

This book provides a comprehensive introduction to the conversational interface, which is becoming the main mode of interaction with virtual personal assistants, smart devices, various types of wearables, and social robots. The book consists of four parts: Part I presents the background to conversational interfaces, examining past and present work on spoken language interaction with computers; Part II covers the various technologies that are required to build a conversational interface along with practical chapters and exercises using open source tools; Part III looks at interactions with smart devices, wearables, and robots, and then goes on to discuss the role of emotion and personality in the conversational interface; Part IV examines methods for evaluating conversational interfaces and discusses future directions.

Aug 17, 2017, 02:51:19 am
Explained: Neural networks

Explained: Neural networks in Articles

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

Jul 26, 2017, 23:42:33 pm
It's Alive

It's Alive in Chatbots - English

[Messenger] Enjoy making your bot with our user-friendly interface. No coding skills necessary. Publish your bot in a click.

Once LIVE on your Facebook Page, your bot is integrated within the "Messages" of your page. This means it is allowed (or not) to interact with and answer people who contact you through the private "Messages" feature of your Facebook Page, or directly through the Messenger App. You can view all the conversations directly in your Facebook account. This also means that no one needs to download an app, and messages are sent directly as notifications to your users.

Jul 11, 2017, 17:18:27 pm
Star Wars: The Last Jedi

Star Wars: The Last Jedi in Robots in Movies

Star Wars: The Last Jedi (also known as Star Wars: Episode VIII – The Last Jedi) is an upcoming American epic space opera film written and directed by Rian Johnson. It is the second film in the Star Wars sequel trilogy, following Star Wars: The Force Awakens (2015).

Having taken her first steps into a larger world, Rey continues her epic journey with Finn, Poe and Luke Skywalker in the next chapter of the saga.

Release date : December 2017

Jul 10, 2017, 10:39:45 am
Alien: Covenant

Alien: Covenant in Robots in Movies

In 2104 the colonization ship Covenant is bound for a remote planet, Origae-6, with two thousand colonists and a thousand human embryos onboard. The ship is monitored by Walter, a newer synthetic physically resembling the earlier David model, albeit with some modifications. A stellar neutrino burst damages the ship, killing some of the colonists. Walter orders the ship's computer to wake the crew from stasis, but the ship's captain, Jake Branson, dies when his stasis pod malfunctions. While repairing the ship, the crew picks up a radio transmission from a nearby unknown planet, dubbed by Ricks as "planet number 4". Against the objections of Daniels, Branson's widow, now-Captain Oram decides to investigate.

Jul 08, 2017, 05:52:25 am
Black Eyed Peas - Imma Be Rocking That Body

Black Eyed Peas - Imma Be Rocking That Body in Video

For the robots of course...

Jul 05, 2017, 22:02:31 pm
Winnie

Winnie in Assistants

[Messenger] The Chatbot That Helps You Launch Your Website.

Jul 04, 2017, 23:56:00 pm