KorrBot in General Chatbots and Software

I've not had much experience with NLP, chatbots, JSON, etc., so there is obviously a gap in my experience/knowledge.

As a side project I've taken a few hours to write a simple chatbot engine/parser.

The bot uses sentence templates, a dictionary and mostly simple substitution and searches.

The bot can learn simple JSON triples and also apply simple inference rules, so it knows birds can fly because birds have wings and wings can fly, etc.
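For anyone curious, a minimal sketch of the triple-store-plus-inference idea might look like the following (the names and relations here are hypothetical illustrations, not korrelan's actual code):

```python
# Minimal triple store: each fact is a (subject, relation, object) triple.
facts = set()

def learn(s, rel, o):
    facts.add((s, rel, o))

def can(subject, action, seen=None):
    """True if subject can do action directly, or via 'is'/'has' links,
    e.g. bird has wings, wings can fly => bird can fly."""
    seen = seen or set()
    if subject in seen:          # guard against cyclic facts
        return False
    seen.add(subject)
    if (subject, "can", action) in facts:
        return True
    return any(can(o, action, seen)
               for s, rel, o in facts
               if s == subject and rel in ("is", "has"))

learn("bird", "has", "wings")
learn("wings", "can", "fly")
learn("cat", "is", "animal")
print(can("bird", "fly"))  # True
print(can("cat", "fly"))   # False
```

The multi-hop chain from the log (korrelan is a human, human has legs, legs have feet, feet can walk, therefore korrelan can walk) falls out of the same recursion, assuming "have" is normalized to "has" during parsing.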

I've got an idea of how to add conversation goals, moods, a personality, etc.

Example conversation... so far lol

Code: [Select]
U: what is a dog
B: i'm not sure what is a dog

U: what is a cat
B: i'm not sure what is a cat

U: can birds fly
B: i'm not sure

U: can a bird fly
B: no a bird can not fly you dumb ass.

U: can a cat fly
B: no a cat can not fly you dumb ass.

U: does a dog have legs
B: i'm not sure

U: does a cat have feet
B: i don't know

U: does korrelan own a car
B: i'm not sure

U: my name is korrelan **Begin Learning**
B: good to know.

U: a bird is a animal
B: good to know... tell me more.

U: a bird has wings
B: ok... what else should i know?

U: korrelan is a human
B: you don't say.

U: korrelan is a male
B: you don't say.

U: a male is a human
B: good to know... tell me more.

U: male is a sex
B: you don't say.

U: wings can fly
B: ok I understand

U: wings can fly
B: ok I understand

U: a human has legs
B: ok... what else should i know?

U: feet can walk
B: ok I understand... tell me more.

U: legs have feet
B: cheers for the information.

U: a dog has legs
B: ok... what else should i know?

U: a cat has legs
B: ok... what else should i know?

U: a dog is a animal
B: good to know... tell me more.

U: a cat is a animal
B: good to know... tell me more.

U: do you know korrelan is a mammal
U: korrelan owns a car
B: sweet... tell me more.

U: a plane has wings **Stop Learning**
B: ok... what else should i know?

U: what is a dog
B: a dog is a animal

U: what is a cat
B: a cat is a animal

U: can birds fly
B: yes birds can fly

U: can a bird fly
B: yes a bird can fly

U: can a cat fly
B: no a cat can not fly you dumb ass.

U: does a dog have legs
B: yes a dog does have legs

U: does a cat have feet
B: yes a cat does have feet

U: does korrelan own a car
B: of course korrelan has a car

U: what colour is the car
B: i'm not sure. what colour is the car?

U: the car is green
B: green is a nice colour.

U: what colour is the car
B: the car is green

U: what colour is the sky
B: i'm not sure. what colour is the sky?

U: the sky is blue
B: blue is a nice colour.

U: what colour is the sky
B: the sky is blue

U: can korrelan walk
B: yes korrelan can walk

U: can a dog walk
B: yes a dog can walk

U: can a dog fly
B: no a dog can not fly you dumb ass.

U: do humans have legs
B: yes a human does have legs

U: where is the tv
B: i'm not sure. where is the tv?

U: the tv is in the lounge
B: the lounge is the best place for it.

U: where is the computer
B: i'm not sure. where is the computer?

U: the computer is in the office
B: the office is the best place for it.

U: where is the tv
B: the tv is in the lounge

U: where is the computer
B: the computer is in the office

U: tell me about albert
B: i don't know who albert is

U: albert is a scientist
B: you don't say.

U: tell me about albert
B: albert is a scientist

U: tell me about korrelan
B: korrelan is a human

Haha... loads of fun...

Ed: What idiot invented the English language?... it's so contrived lol.


85 Comments | Started April 13, 2018, 12:00:16 pm


Simple AI Website to let you know what is AI in General AI Discussion

Artificial Intelligence for Everyone

We have a dream… to make everyone enjoy artificial intelligence as easily as savouring a cup of latte.

24 Comments | Started May 10, 2018, 05:11:29 pm


XKCD Comic : GDPR in XKCD Comic

25 May 2018, 5:00 am

By clicking anywhere, scrolling, or closing this notification, you agree to be legally bound by the witch Sycorax within a cloven pine.

Source: xkcd.com

Started May 26, 2018, 12:03:49 pm


Fleet of autonomous boats could service some cities, reducing road traffic in Robotics News

23 May 2018, 7:00 pm

The future of transportation in waterway-rich cities such as Amsterdam, Bangkok, and Venice — where canals run alongside and under bustling streets and bridges — may include autonomous boats that ferry goods and people, helping clear up road congestion.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Senseable City Lab in the Department of Urban Studies and Planning (DUSP), have taken a step toward that future by designing a fleet of autonomous boats that offer high maneuverability and precise control. The boats can also be rapidly 3-D printed using a low-cost printer, making mass manufacturing more feasible.

The boats could be used to taxi people around and to deliver goods, easing street traffic. In the future, the researchers also envision the driverless boats being adapted to perform city services overnight, instead of during busy daylight hours, further reducing congestion on both roads and canals.

“Imagine shifting some of infrastructure services that usually take place during the day on the road — deliveries, garbage management, waste management — to the middle of the night, on the water, using a fleet of autonomous boats,” says CSAIL Director Daniela Rus, co-author on a paper describing the technology that’s being presented at this week’s IEEE International Conference on Robotics and Automation.

Moreover, the boats — rectangular 4-by-2-meter hulls equipped with sensors, microcontrollers, GPS modules, and other hardware — could be programmed to self-assemble into floating bridges, concert stages, platforms for food markets, and other structures in a matter of hours. “Again, some of the activities that are usually taking place on land, and that cause disturbance in how the city moves, can be done on a temporary basis on the water,” says Rus, who is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science.

The boats could also be equipped with environmental sensors to monitor a city’s waters and gain insight into urban and human health.

Co-authors on the paper are: first author Wei Wang, a joint postdoc in CSAIL and the Senseable City Lab; Luis A. Mateos and Shinkyu Park, both DUSP postdocs; Pietro Leoni, a research fellow, and Fábio Duarte, a research scientist, both in DUSP and the Senseable City Lab; Banti Gheneti, a graduate student in the Department of Electrical Engineering and Computer Science; and Carlo Ratti, a principal investigator and professor of the practice in the DUSP and director of the MIT Senseable City Lab.

Better design and control

The work was conducted as part of the “Roboat” project, a collaboration between the MIT Senseable City Lab and the Amsterdam Institute for Advanced Metropolitan Solutions (AMS). In 2016, as part of the project, the researchers tested a prototype that cruised around the city’s canals, moving forward, backward, and laterally along a preprogrammed path.

The ICRA paper details several important new innovations: a rapid fabrication technique, a more efficient and agile design, and advanced trajectory-tracking algorithms that improve control, precision docking and latching, and other tasks.

To make the boats, the researchers 3-D-printed a rectangular hull with a commercial printer, producing 16 separate sections that were spliced together. Printing took around 60 hours. The completed hull was then sealed by adhering several layers of fiberglass.

Integrated onto the hull are a power supply, Wi-Fi antenna, GPS, and a minicomputer and microcontroller. For precise positioning, the researchers incorporated an indoor ultrasound beacon system and outdoor real-time kinematic GPS modules, which allow for centimeter-level localization, as well as an inertial measurement unit (IMU) module that monitors the boat’s yaw and angular velocity, among other metrics.

The boat is a rectangular shape, instead of the traditional kayak or catamaran shapes, to allow the vessel to move sideways and to attach itself to other boats when assembling other structures. Another simple yet effective design element was thruster placement. Four thrusters are positioned in the center of each side, instead of at the four corners, generating forward and backward forces. This makes the boat more agile and efficient, the researchers say.

The team also developed a method that enables the boat to track its position and orientation more quickly and accurately. To do so, they developed an efficient version of a nonlinear model predictive control (NMPC) algorithm, generally used to control and navigate robots within various constraints.

The NMPC and similar algorithms have been used to control autonomous boats before. But typically those algorithms are tested only in simulation or don't account for the dynamics of the boat. The researchers instead incorporated into the algorithm simplified nonlinear mathematical models that account for a few known parameters, such as the drag of the boat, centrifugal and Coriolis forces, and added mass due to accelerating or decelerating in water. The researchers also used an identification algorithm that then identifies any unknown parameters as the boat is trained on a path.

Finally, the researchers used an efficient predictive-control platform to run their algorithm, which can rapidly determine upcoming actions and increases the algorithm’s speed by two orders of magnitude over similar systems. While other algorithms execute in about 100 milliseconds, the researchers’ algorithm takes less than 1 millisecond.
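As a toy illustration of the receding-horizon idea behind NMPC (this is not the researchers' algorithm; the drag model, horizon length, and thrust grid below are all invented for illustration), a controller can simulate each candidate thrust forward over a short horizon and pick the one that minimizes predicted tracking error:

```python
# Toy receding-horizon controller for a 1-D boat: at each step, search over
# candidate thrusts, simulate a simple nonlinear drag model over a short
# horizon, and apply the thrust that minimizes predicted tracking error.

def simulate(pos, vel, thrust, steps=5, dt=0.2, drag=0.3):
    # forward-simulate position under quadratic drag (a known nonlinearity)
    for _ in range(steps):
        acc = thrust - drag * vel * abs(vel)
        vel += acc * dt
        pos += vel * dt
    return pos

def mpc_step(pos, vel, target):
    # brute-force search over a thrust grid in [-1, 1]
    candidates = [t / 10.0 for t in range(-10, 11)]
    return min(candidates, key=lambda t: (simulate(pos, vel, t) - target) ** 2)

# steer from rest at x=0 toward x=1: the controller should push forward
thrust = mpc_step(0.0, 0.0, 1.0)
```

A real NMPC solves a constrained optimization over a continuous control sequence rather than a grid search, but the predict-then-choose loop is the same shape.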

Testing the waters

To demonstrate the control algorithm’s efficacy, the researchers deployed a smaller prototype of the boat along preplanned paths in a swimming pool and in the Charles River. Over the course of 10 test runs, the researchers observed average tracking errors — in positioning and orientation — smaller than tracking errors of traditional control algorithms.

That accuracy is thanks, in part, to the boat’s onboard GPS and IMU modules, which determine position and direction, respectively, down to the centimeter. The NMPC algorithm crunches the data from those modules and weighs various metrics to steer the boat true. The algorithm is implemented in a controller computer and regulates each thruster individually, updating every 0.2 seconds.

“The controller considers the boat dynamics, current state of the boat, thrust constraints, and reference position for the coming several seconds, to optimize how the boat drives on the path,” Wang says. “We can then find optimal force for the thrusters that can take the boat back to the path and minimize errors.”

The innovations in design and fabrication, as well as faster and more precise control algorithms, point toward feasible driverless boats used for transportation, docking, and self-assembling into platforms, the researchers say.

A next step for the work is developing adaptive controllers to account for changes in mass and drag of the boat when transporting people and goods. The researchers are also refining the controller to account for wave disturbances and stronger currents.

“We actually found that the Charles River has much more current than in the canals in Amsterdam,” Wang says. “But there will be a lot of boats moving around, and big boats will bring big currents, so we still have to consider this.”

The work was supported by a grant from AMS.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Use the link at the top of the story to get to the original article.

Started May 25, 2018, 12:06:27 pm


http://www.isaacarthur.net/ in Future of AI

This channel focuses on exploring concepts in science with an emphasis on futurism and space exploration, along with a healthy dose of science fiction. While we explore many concepts and technologies that are far beyond us now, we try to keep everything inside the bounds of known science or major theories.

Started May 25, 2018, 06:40:15 am


Gauging language proficiency through eye movement in Robotics News

23 May 2018, 4:59 am

A study by MIT researchers has uncovered a new way of telling how well people are learning English: tracking their eyes.

That’s right. Using data generated by cameras trained on readers’ eyes, the research team has found that patterns of eye movement — particularly how long people’s eyes rest on certain words — correlate strongly with performance on standardized tests of English as a second language.

“To a large extent [eye movement] captures linguistic proficiency, as we can measure it against benchmarks of standardized tests,” says Yevgeni Berzak, a postdoc in MIT’s Department of Brain and Cognitive Sciences (BCS) and co-author of a new paper outlining the research. He adds: “The signal of eye movement during reading is very rich and very informative.”

Indeed, the researchers even suggest the new method has potential use as a testing tool. “It has real potential applications,” says Roger Levy, an associate professor in BCS and another of the study’s co-authors.

The paper, “Assessing Language Proficiency from Eye Movements in Reading,” is being published in the Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. The authors are Berzak, a postdoc in the Computational Psycholinguistics Group in BCS; Boris Katz, a principal research scientist and head of the InfoLab Group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL); and Levy, who also directs the Computational Psycholinguistics Lab in BCS.

The illusion of continuity

The study delves into a phenomenon about reading that we may never notice, no matter how much we read: Our eyes do not move continuously along a string of text, but instead fix on particular words for 200 to 250 milliseconds. We also take leaps from one word to another that may last about 1/20 of a second.

“Although you have a subjective experience of a continuous, smooth pass over text, that’s absolutely not what your eyes are doing,” says Levy. “Your eyes are jumping around, mostly forward, sometimes backward. Your mind stitches together a smooth experience. … It’s a testimony to the ability of the mind to create illusions.”

But if you are learning a new language, your eyes may dwell on particular words for longer periods of time, as you try to comprehend the text. The particular pattern of eye movement, for this reason, can reveal a lot about comprehension, at least when analyzed in a clearly defined context.

To conduct the study, the researchers used a dataset of eye movement records from work conducted by Berzak. The dataset has 145 students of English as a second language, divided almost evenly among four native languages — Chinese, Japanese, Portuguese, and Spanish — as well as 37 native English speakers.

The readers were given 156 sentences to read, half of which were part of a “fixed test” in which everyone in the study read the same sentences. The video footage enabled the research team to focus intensively on a series of duration times — the length of time readers were fixated on particular words.

The research team called the set of metrics they used the “EyeScore.” After evaluating how it correlated with the Michigan English Test (MET) and the Test of English as a Foreign Language (TOEFL), they concluded in the paper that the EyeScore method produced “competitive results” with the standardized tests, “further strengthening the evidence for the ability of our approach to capture language proficiency.”

As a result, the authors write, the new method is “the first proof of concept for a system which utilizes eye tracking to measure linguistic ability.”

Sentence by sentence

Other scholars say the study is an interesting addition to the research literature on the subject.

“The method [used in the study] is very innovative and — in my opinion — holds much promise for using eye-tracking technology to its full potential,” says Erik Reichle, head of the Department of Psychology at Macquarie University in Sydney, Australia, who has conducted many experiments about tracking eye movement. Reichle adds that he suspects the paper “will have a big impact in a number of different fields, including those more directly related to second-language learning.”    

As the researchers see it, the current study is just one step on a longer journey of exploration about the interactions of language and cognition.

As Katz says, “The bigger question is, how does language affect your brain?” Given that we only began processing written text within the last several thousand years, he notes, our reading ability is an example of the “amazing plasticity” of the brain. Before too long, he adds, “We could actually be in a position to start answering these questions.”

Levy, for his part, thinks that it may be possible to make these eye tests about reading more specific. Rather than evaluating reader comprehension over a corpus of 156 sentences, as the current study did, experts might be able to render more definitive judgments about even smaller strings of text.

“One thing that we would hope to do in the future that we haven’t done yet, for example, is ask, on a sentence-by-sentence basis, to what extent can we tell how well you understood a sentence by the eye movements you made when you read it,” Levy says. “That’s an open question nobody’s answered. We hope we might be able to do that in the future.”

The study was supported, in part, by MIT’s Center for Brains, Minds, and Machines, through a National Science Foundation grant.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Use the link at the top of the story to get to the original article.

2 Comments | Started May 24, 2018, 12:00:53 pm


XKCD Comic : Business Update in XKCD Comic

23 May 2018, 5:00 am

Our customers keep sending us their personal information, even though we've repeatedly asked them to stop. The EU told me I'm the heir to some ancient European throne that makes me exempt from the GDPR, but we should probably still try to fix that.

Source: xkcd.com

Started May 24, 2018, 12:00:53 pm

ranch vermin

soldering motor attachments in Home Made Robots

This all stems from the fact that i havent got anything to screw onto my motors to chain them together for a body.

My first attempts to make a "tilt+pan kit" of 2 DC motors (connected at 90°) were pretty dismal, but the more I did it the stronger they got.

At the moment the way I do it is I solder a blob on each of the motors separately, and at this point they both take quite good hits without budging or coming off the motor, so I'm proud of it up to here.

But when I put them in the 90-degree formation and try to connect them together, it seems harder to get a solid joint.

I have a feeling that the solder has to be very hot to form a better bond, so I get the iron really hot and then I smoosh over it all very carefully yet quickly, like a Chinese ink artist, and that's the best bond I've got so far.

It survived a fall test (and even the motors themselves fall apart if you drop them), but I can still pull them apart with my hands pretty easily.

I'll keep doing it today, and I'll see if I can get a better bond. It's not over yet.

9 Comments | Started May 23, 2018, 07:55:08 am


Wish Lists in General Project Discussion

Not everyone reading this is working on an artificial intelligence project, but if you are, what is it that you wish for most? What's the next thing that you need that would help you furthest towards your goals?

For my project, I need a comprehensive lexical database for the English language. All my work in recent years has been directed towards producing such a resource. The other thing that I need is a more powerful computer for processing the huge amount of data that's involved. My quad core i7 with 16GB of RAM and 1TB of SSD just isn't powerful enough any more, but that problem is relatively easy to solve.

I know that we all need more time, more money, and more education to advance our projects, but I'm mainly interested in specifics.

9 Comments | Started May 22, 2018, 11:28:09 pm


Making driverless cars change lanes more like human drivers do in Robotics News

23 May 2018, 4:59 am

In the field of self-driving cars, algorithms for controlling lane changes are an important topic of study. But most existing lane-change algorithms have one of two drawbacks: Either they rely on detailed statistical models of the driving environment, which are difficult to assemble and too complex to analyze on the fly; or they’re so simple that they can lead to impractically conservative decisions, such as never changing lanes at all.

At the International Conference on Robotics and Automation tomorrow, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new lane-change algorithm that splits the difference. It allows for more aggressive lane changes than the simple models do but relies only on immediate information about other vehicles’ directions and velocities to make decisions.

“The motivation is, ‘What can we do with as little information as possible?’” says Alyssa Pierson, a postdoc at CSAIL and first author on the new paper. “How can we have an autonomous vehicle behave as a human driver might behave? What is the minimum amount of information the car needs to elicit that human-like behavior?”

Pierson is joined on the paper by Daniela Rus, the Viterbi Professor of Electrical Engineering and Computer Science; Sertac Karaman, associate professor of aeronautics and astronautics; and Wilko Schwarting, a graduate student in electrical engineering and computer science.

“The optimization solution will ensure navigation with lane changes that can model an entire range of driving styles, from conservative to aggressive, with safety guarantees,” says Rus, who is the director of CSAIL.

One standard way for autonomous vehicles to avoid collisions is to calculate buffer zones around the other vehicles in the environment. The buffer zones describe not only the vehicles’ current positions but their likely future positions within some time frame. Planning lane changes then becomes a matter of simply staying out of other vehicles’ buffer zones.

For any given method of computing buffer zones, algorithm designers must prove that it guarantees collision avoidance, within the context of the mathematical model used to describe traffic patterns. That proof can be complex, so the optimal buffer zones are usually computed in advance. During operation, the autonomous vehicle then calls up the precomputed buffer zones that correspond to its situation.

The problem is that if traffic is fast enough and dense enough, precomputed buffer zones may be too restrictive. An autonomous vehicle will fail to change lanes at all, whereas a human driver would cheerfully zip around the roadway.

With the MIT researchers’ system, if the default buffer zones are leading to performance that’s far worse than a human driver’s, the system will compute new buffer zones on the fly — complete with proof of collision avoidance.

That approach depends on a mathematically efficient method of describing buffer zones, so that the collision-avoidance proof can be executed quickly. And that’s what the MIT researchers developed.

They begin with a so-called Gaussian distribution — the familiar bell-curve probability distribution. That distribution represents the current position of the car, factoring in both its length and the uncertainty of its location estimation.

Then, based on estimates of the car’s direction and velocity, the researchers’ system constructs a so-called logistic function. Multiplying the logistic function by the Gaussian distribution skews the distribution in the direction of the car’s movement, with higher speeds increasing the skew.

The skewed distribution defines the vehicle’s new buffer zone. But its mathematical description is so simple — using only a few equation variables — that the system can evaluate it on the fly.
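A rough sketch of that construction (the paper's exact parameterization isn't given here; the sigma and steepness constants below are invented) might look like:

```python
import math

def gaussian(x, mu=0.0, sigma=1.0):
    # bell curve centered on the car's estimated position along the travel axis
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def logistic(x, speed, k=1.0):
    # steeper for higher speeds, shifting probability mass forward
    return 1.0 / (1.0 + math.exp(-k * speed * x))

def buffer_density(x, speed):
    # skewed distribution defining the buffer zone: Gaussian x logistic
    return gaussian(x) * logistic(x, speed)

# at speed 0 the logistic is flat (0.5 everywhere), giving a symmetric buffer;
# at higher speeds more of the buffer lies ahead (x > 0) of the car
ahead  = buffer_density(1.0, speed=2.0)
behind = buffer_density(-1.0, speed=2.0)
```

Because the whole zone is described by a handful of scalar parameters, evaluating it at a candidate position is just a couple of exponentials, which is what makes on-the-fly recomputation cheap.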

The researchers tested their algorithm in a simulation including up to 16 autonomous cars driving in an environment with several hundred other vehicles.

“The autonomous vehicles were not in direct communication but ran the proposed algorithm in parallel without conflict or collisions,” explains Pierson. “Each car used a different risk threshold that produced a different driving style, allowing us to create conservative and aggressive drivers. Using the static, precomputed buffer zones would only allow for conservative driving, whereas our dynamic algorithm allows for a broader range of driving styles.”

This project was supported, in part, by the Toyota Research Institute and the Office of Naval Research.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Use the link at the top of the story to get to the original article.

Started May 23, 2018, 12:00:02 pm

Bot Development Frameworks - Getting Started in Articles

What Are Bot Frameworks?

Simply explained, a bot framework is where bots are built and where their behavior is defined. Developing and targeting so many messaging platforms and SDKs for chatbot development can be overwhelming. Bot development frameworks abstract away much of the manual work that's involved in building chatbots. A bot development framework consists of a Bot Builder SDK, Bot Connector, Developer Portal, and Bot Directory. There’s also an emulator that you can use to test the developed bot.

Mar 23, 2018, 20:00:23 pm

A Guide to Chatbot Architecture in Articles

Humans have always been fascinated by self-operating devices, and today it is software called “chatbots” that is becoming more human-like and automated. The combination of immediate response and constant connectivity makes them an enticing way to extend or replace the web-application trend. But how do these automated programs work? Let’s have a look.

Mar 13, 2018, 14:47:09 pm

Sing for Fame in Chatbots - English

Sing for Fame is a bot that hosts a singing competition. 

Users can show their skills by singing their favorite songs. 

If someone needs inspiration, the bot provides suggestions, including song lyrics and videos.

The bot then plays the song to other users, who can rate it.

Based on the ratings the bot generates a top ten.

Jan 30, 2018, 22:17:57 pm

ConciergeBot in Assistants

A concierge service bot that handles guest requests and FAQs, as well as recommends restaurants and local attractions.

Messenger Link : messenger.com/t/rthhotel

Jan 30, 2018, 22:11:55 pm

What are the main techniques for the development of a good chatbot ? in Articles

Chatbots are among the most useful and reliable technological helpers for those who own e-commerce websites and similar resources. However, an important problem is that people may not know which technologies are best suited to achieving their goals. In this article you can become more familiar with the most important principles of chatbot building.

Oct 12, 2017, 01:31:00 am

Kweri in Chatbots - English

Kweri asks you questions of brilliance and stupidity. Provide correct answers to win. Type ‘Y’ for yes and ‘N’ for no!


FB Messenger
Oct 12, 2017, 01:24:37 am

The Conversational Interface: Talking to Smart Devices in Books

This book provides a comprehensive introduction to the conversational interface, which is becoming the main mode of interaction with virtual personal assistants, smart devices, various types of wearables, and social robots. The book consists of four parts: Part I presents the background to conversational interfaces, examining past and present work on spoken language interaction with computers; Part II covers the various technologies that are required to build a conversational interface, along with practical chapters and exercises using open source tools; Part III looks at interactions with smart devices, wearables, and robots, and then goes on to discuss the role of emotion and personality in the conversational interface; Part IV examines methods for evaluating conversational interfaces and discusses future directions.

Aug 17, 2017, 02:51:19 am

Explained: Neural networks in Articles

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

Jul 26, 2017, 23:42:33 pm

It's Alive in Chatbots - English

[Messenger] Enjoy making your bot with our user-friendly interface. No coding skills necessary. Publish your bot in a click.

Once LIVE on your Facebook Page, it is integrated within the “Messages” of your page. This means your bot is allowed (or not) to interact with and answer people who contact you through the private “Messages” feature of your Facebook Page, or directly through the Messenger App. You can view all the conversations directly in your Facebook account. This also means that no one needs to download an app, and messages are sent directly as notifications to your users.

Jul 11, 2017, 17:18:27 pm