avatar

Tyler

XKCD Comic : Negative Results in XKCD Comic

Negative Results
16 July 2018, 5:00 am

P.S. We're going to the beach this weekend, so I'm attaching my preregistration forms for that trip now, before we find out whether it produces any interesting results.

Source: xkcd.com

Started Today at 12:01:22 pm
avatar

LOCKSUIT

Anyone know of a parser like this? in General AI Discussion

I'm looking for a parser program that takes a sentence and finds the smaller parts of the whole, like this:


        Cows are very cute and eat food.
           /                      \
  Cows are very cute         and eat food.
    /          \               /        \
Cows are    very cute        and     eat food.
 /    \      /     \          |       /    \
Cows  are  very   cute       and    eat   food.


Note that it should not output "are very"; as shown above, that split should come out as "very cute".

I've seen parse trees, but I think those are made by a human expert behind the scenes, not by the computer.
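What you're describing is constituency parsing, and modern parsers learn their grammars from treebanks rather than having a human expert hand-code each tree. As a minimal sketch, here is NLTK's chart parser with a toy hand-written grammar (the grammar rules are my own invention, just enough to show that "very cute" comes out as a constituent while "are very" does not):

Code:
import nltk

# Toy grammar, invented for illustration; real parsers induce their rules
# from treebanks instead of relying on a hand-coded grammar like this.
grammar = nltk.CFG.fromstring("""
S    -> NP VP
VP   -> VP CONJ VP | V ADJP | V NP
ADJP -> ADV ADJ
NP   -> 'Cows' | 'food'
V    -> 'are' | 'eat'
ADV  -> 'very'
ADJ  -> 'cute'
CONJ -> 'and'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("Cows are very cute and eat food".split()):
    tree.pretty_print()  # "very cute" appears under a single ADJP node

Off-the-shelf constituency parsers (e.g. the Berkeley neural parser or Stanford CoreNLP) produce the same kind of tree for arbitrary sentences without a hand-written grammar.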

51 Comments | Started July 14, 2018, 03:28:25 pm
avatar

Art

Orbiter 9 in General Chat

Another nicely done movie from Netflix, about a young woman in a spacecraft on a voyage to a distant but sustainable planet, carrying the hope of humanity. She has traveled alone for many years and is finally about to meet a real human, for a very brief service rendezvous to repair her ship's oxygen supply.
What happens after their meeting changes everything.

https://www.imdb.com/title/tt3469798/videoplayer/vi1313519641?ref_=tt_ov_vi

Started Today at 03:02:46 am
avatar

ranch vermin

Yet another possible way to go about ai in General AI Discussion

So I'd like you to consider a perceptron as an "approximating search engine"... but in this one, the perceptron is looking inside another model, which is provided by the environment. (Excuse me if you've heard that one plenty of times; I'm trying to do something slightly original here.)

So these perceptrons, as they evolve, hillclimb, or anneal or WHATEVER, are maximizing a score function, which itself is hard-coded, fixed in the system.

When you're looking at this "model generated from the environment", the first thing one thinks of is "oh, I'm developing an activity, or some reaction for it to do", but this is actually only thinking about it in a boring, obvious way!

These hillclimbing perceptrons could do lots more than just this!

Perhaps you could score generating 3D from 2D! And after that you could clear up a lot of object distinctions and really neaten up your system before you go and develop the action.

So there's a lot of fine tuning that some prior reorganization could help you with, but it could all just be scoring functions the same!
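To make the idea concrete, here's a minimal sketch of the loop being described: a perceptron whose weights are hill-climbed against a fixed, hard-coded score function, evaluated against a model provided by the environment. The environment model and score below are stand-ins invented for illustration:

Code:
import random

def perceptron(weights, x):
    # weights[0] is a bias term with an implicit input of 1
    activation = weights[0] + sum(w * xi for w, xi in zip(weights[1:], x))
    return 1 if activation > 0 else 0

def env_model(x):
    # stand-in for the "model generated from the environment"
    return 1 if sum(x) > len(x) / 2 else 0

def score(weights, samples):
    # the hard-coded, fixed score: agreement with the environment model
    return sum(perceptron(weights, x) == env_model(x) for x in samples)

samples = [[random.random() for _ in range(5)] for _ in range(200)]
weights = [random.uniform(-1, 1) for _ in range(6)]  # bias + 5 inputs
best = score(weights, samples)

for _ in range(2000):
    # hill climb: keep a random weight tweak only if the score improves
    candidate = [w + random.gauss(0, 0.1) for w in weights]
    s = score(candidate, samples)
    if s > best:
        weights, best = candidate, s

print(f"final score: {best}/{len(samples)}")

The point of the sketch is that nothing in the loop cares what the score measures; swapping in a "3D from 2D" score would reuse the identical machinery.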

3 Comments | Started July 14, 2018, 10:11:32 am
avatar

Tyler

XKCD Comic : An Apple for a Dollar in XKCD Comic

An Apple for a Dollar
13 July 2018, 5:00 am

I'd like 0.4608 apples, please.

Source: xkcd.com

2 Comments | Started July 14, 2018, 12:00:34 pm
avatar

Art

TAU - followup in AI in Film and Literature.

TAU turned out to be exactly the sci-fi A.I. thriller I had hoped it would be.
It had a good storyline and plot, and some great acting, especially from the very talented Maika Monroe.

The movie dealt with a lot of the issues we suspect there might be with an advanced A.I. Whether or not humans matter to it remains to be seen.

All in all, I quite liked it!

Let yourself sit back, be entertained for a short while, and enjoy this nicely done movie!

Netflix has it available for subscribers.

https://www.imdb.com/title/tt4357394/

Started July 15, 2018, 03:58:08 am
avatar

Art

BlackFly in General Chat

A very large "one-person" drone with a range of 25 miles (about 40 km); it is classed as an ultralight.
Production costs should be comparable to those of an SUV.  Interesting indeed...

https://www.chonday.com/31643/fliabalfl6/

Started July 14, 2018, 06:02:53 pm
avatar

korrelan

KorrBot in General Chatbots and Software

I've not had much experience with NLP, chatbots, JSON, etc., so there is obviously a gap in my experience/knowledge.

As a side project I've taken a few hours to write a simple chatbot engine/parser.

The bot uses sentence templates, a dictionary, and mostly simple substitution and searches.

The bot can learn simple JSON triples and also apply simple inference rules, so it knows birds can fly because birds have wings and wings can fly, etc.

I've got an idea how to add conversation goals, moods, a personality, etc...

Example conversation... so far lol

Code:
U: what is a dog
B: i'm not sure what is a dog

U: what is a cat
B: i'm not sure what is a cat

U: can birds fly
B: i'm not sure

U: can a bird fly
B: no a bird can not fly you dumb ass.

U: can a cat fly
B: no a cat can not fly you dumb ass.

U: does a dog have legs
B: i'm not sure

U: does a cat have feet
B: i don't know

U: does korrelan own a car
B: i'm not sure

U: my name is korrelan **Begin Learning**
B: good to know.

U: a bird is a animal
B: good to know... tell me more.

U: a bird has wings
B: ok... what else should i know?

U: korrelan is a human
B: you don't say.

U: korrelan is a male
B: you don't say.

U: a male is a human
B: good to know... tell me more.

U: male is a sex
B: you don't say.

U: wings can fly
B: ok I understand

U: wings can fly
B: ok I understand

U: a human has legs
B: ok... what else should i know?

U: feet can walk
B: ok I understand... tell me more.

U: legs have feet
B: cheers for the information.

U: a dog has legs
B: ok... what else should i know?

U: a cat has legs
B: ok... what else should i know?

U: a dog is a animal
B: good to know... tell me more.

U: a cat is a animal
B: good to know... tell me more.

U: do you know korrelan is a mammal
U: korrelan owns a car
B: sweet... tell me more.

U: a plane has wings **Stop Learning**
B: ok... what else should i know?

U: what is a dog
B: a dog is a animal

U: what is a cat
B: a cat is a animal

U: can birds fly
B: yes birds can fly

U: can a bird fly
B: yes a bird can fly

U: can a cat fly
B: no a cat can not fly you dumb ass.

U: does a dog have legs
B: yes a dog does have legs

U: does a cat have feet
B: yes a cat does have feet

U: does korrelan own a car
B: of course korrelan has a car

U: what colour is the car
B: i'm not sure. what colour is the car?

U: the car is green
B: green is a nice colour.

U: what colour is the car
B: the car is green

U: what colour is the sky
B: i'm not sure. what colour is the sky?

U: the sky is blue
B: blue is a nice colour.

U: what colour is the sky
B: the sky is blue

U: can korrelan walk
B: yes korrelan can walk

U: can a dog walk
B: yes a dog can walk

U: can a dog fly
B: no a dog can not fly you dumb ass.

U: do humans have legs
B: yes a human does have legs

U: where is the tv
B: i'm not sure. where is the tv?

U: the tv is in the lounge
B: the lounge is the best place for it.

U: where is the computer
B: i'm not sure. where is the computer?

U: the computer is in the office
B: the office is the best place for it.

U: where is the tv
B: the tv is in the lounge

U: where is the computer
B: the computer is in the office

U: tell me about albert
B: i don't know who albert is

U: albert is a scientist
B: you don't say.

U: tell me about albert
B: albert is a scientist

U: tell me about korrelan
B: korrelan is a human




Haha... loads of fun...

Ed: What idiot invented the English language?... it's so contrived lol.

 :)
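Guessing at the mechanics, the learned facts behave like a triple store with a one-hop inference rule. This is my own reconstruction, not KorrBot's actual code:

Code:
# Learned facts as a triple store, plus the one-hop inference rule behind
# "birds can fly because birds have wings and wings can fly".

triples = set()

def learn(subj, rel, obj):
    triples.add((subj, rel, obj))

def can(subj, verb):
    # direct fact first
    if (subj, "can", verb) in triples:
        return True
    # inference: subj can X if subj has a part that can X
    parts = [o for (s, r, o) in triples if s == subj and r == "has"]
    return any((part, "can", verb) in triples for part in parts)

learn("bird", "has", "wings")
learn("wings", "can", "fly")
learn("cat", "has", "legs")

print(can("bird", "fly"))  # True, inferred via wings
print(can("cat", "fly"))   # False: "no a cat can not fly you dumb ass."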

88 Comments | Started April 13, 2018, 12:00:16 pm
avatar

Tyler

Smart office enables a personalized workplace atmosphere in Robotics News

Smart office enables a personalized workplace atmosphere
13 July 2018, 3:10 pm

The atmosphere of a given space — the light, sounds, and sensorial qualities that make it distinct from other spaces — has a marked, quantifiable effect on the experiences of the people who inhabit those spaces. Mood, behavior, creativity, sleep, and health are all directly impacted by one’s immediate surroundings.

In the workplace, atmosphere can influence productivity and relationships, as well as overall employee satisfaction and retention. Recent studies have identified a decline in workplace satisfaction — particularly in the knowledge economy, where distraction and disengagement can cost billions of dollars in lost productivity and employee turnover.

Mediated Atmosphere, a project by the Responsive Environments group at the MIT Media Lab, seeks to improve both wellbeing and productivity in the workplace by improving the workplace atmosphere at an individual level. Using modular, real-time control infrastructure with biosignal sensors, controllable lighting, projection, and sound, Mediated Atmosphere creates immersive environments designed to help users focus, de-stress, and work comfortably.

Smart office with biosensors and machine learning

With the boom of internet of things technologies over the last few years, then-master’s student Nan Zhao noticed that the many lighting solutions, wireless speakers, and home automation platforms on the market lacked a multimodal quality: They weren't synchronizing light, sound, images, fragrances, and thermal control in a meaningful way. Also missing in most available smart home and office products is a basis in physiology — platforms that incorporate research on the impact of atmospheric scenes on cognition and behavior. For this project, Zhao drew on existing research showing the positive effects of natural views and sounds on mental state, as well as the effects of light and sound on mood, alertness, and memory.

In the course of this research, however, Zhao kept coming to the same conclusion: “It’s not one size fits all.”

“People need a place that is fascinating, that gives them a feeling of being away, and is rich but predictable,” she says. “However, this place is different for different people. With our approach, we want to create a personalized experience.”

Comprising a frameless screen (designed with a special aspect ratio so it doesn’t feel like watching TV), a custom lighting network, a speaker array, video projection, and both wearable and contact-free biosignal sensors, Mediated Atmosphere synchronizes and controls numerous modalities.

Zhao and her collaborators also developed a new approach for controlling the system: a control map that compresses a complex set of input parameters to a simplified map-like representation. The compass points of the map are abstract control dimensions, such as focus or restoration. That way, rather than worrying about light levels or sound sources, users can simply tell the system what they want based on how focused or relaxed they want to be. The biosignal sensor stream computes a focus and restoration indicator based on measures developed and evaluated by Zhao and her team. Using these indicators, Mediated Atmosphere can label what specific atmospheric scenes mean for the user, and learn how to automatically trigger changes based on a user’s actual responses and activities.
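As a rough sketch of the control-map idea (scene names, coordinates, and the blending rule below are invented for illustration; the real system computes its focus and restoration indicators from biosignals), each preset scene can be imagined as a point in an abstract (focus, restoration) plane, with requests served by proximity weighting:

Code:
import math

# Each preset scene sits at a point in a (focus, restoration) plane;
# a user request is served by weighting scenes by proximity to the
# requested point. All names and numbers here are assumptions.

SCENES = {
    "library":     (0.9, 0.2),  # high focus, low restoration
    "forest":      (0.3, 0.9),  # low focus, high restoration
    "coffee_shop": (0.6, 0.5),
}

def blend_weights(target):
    # inverse-distance weighting, normalised to sum to 1
    raw = {}
    for name, (f, r) in SCENES.items():
        d = math.hypot(target[0] - f, target[1] - r)
        raw[name] = 1.0 / (d + 1e-6)
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

# "I want to be very focused, slightly restored."
print(blend_weights((0.9, 0.3)))  # the library scene dominates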

Customized workspace

The smart office concept is designed to self-regulate on the basis of the user’s activities and physiology. Using biosignal sensors to track heart-rate variability and facial expressions, the prototype both responds to the user’s moods in real time and tracks responses. A user study published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies in June 2017 found that the Mediated Atmosphere smart office prototype had a positive effect on occupants’ perceptions and physiological responses.

“We imagine a workspace that, when asked, can instantly trade the engaging focus of a library with the liberating sensation of a stroll through the forest,” explains Zhao, the first author on the paper. “We want to create an environment player that can recommend or automate your space similar to how Spotify or Pandora gives you access to a world of music. We want to help people to manage their day by giving them the right place at the right time.”

The study of 29 users offered five different ambient scenes, ranging from forest streams to bustling coffee shops, measuring how the environment influenced participants' ability to focus and restore from stress. A second study with nine subjects and 33 scenes, published in Zhao’s thesis, looked at how well the user interface worked in applications where the choice of environments was driven by sensors. In future iterations, Zhao hopes to give users the ability to record their own personal favorite places and upload them into the system, in addition to the built-in options.

Zhao is working with a number of industry experts to hone both the technology and the experiential effectiveness of Mediated Atmosphere. Media Lab alumna Susanne Seitinger, a lighting expert at Philips, worked with Zhao on the lighting installation. Steelcase has advised Zhao on designing for workplaces. International Flavors and Fragrances, a Media Lab member company, is supporting the team’s efforts to add an olfactory display into the latest prototype. Most recently, member company Bose has been supporting the work and helping to take the prototype to the next level — the next iteration will be a modular system that can be installed in any existing workspace so Zhao’s team can conduct experiments on this technology in the wild.

Lee Zamir, director of the BOSEbuild team, is enthusiastic about Mediated Atmosphere’s potential to help redefine the workspace.

“The Mediated Atmosphere project has the potential to improve and rethink the work environment,” he says. “We go to work not just to make a living, but to be challenged, to accomplish, to focus, and to connect with others to achieve great things. When we are able to do this, when we have a ‘good day at work,’ it improves all the other parts of our lives. We carry that sense of purpose and progress from our workday with us.”

In addition to the next phase of research in office environments, Zhao is also creating a smaller, modular system that could be installed in any office or even in a home office. The team is exploring more sensory modalities, such as thermal control, airflow, and scent.

Future office

Zhao envisions a future office where employees’ workstations come equipped with Mediated Atmosphere platforms, but the concept is a long way from being ready to market or scale. One major challenge is to measure impact reliably during real work scenarios without burdening the user; to that end, Zhao is developing a contact-free sensor system to remove the wearable component. Another difficulty is creating customizable installations that fit into different sizes and types of office spaces, allowing colleagues to each have their own Mediated Atmosphere workstation without disrupting one another. The team is collecting data and doing image-based analysis using machine learning tools to address this challenge.

But perhaps the challenge Zhao takes most seriously is that of adding real value.

“The same technology that can create a memorable, wonderful, stimulating experience can also create an irritating, elevator-music type of experience,” she says. “It takes artistic intuition and empathy to create the former. That is also why personalization is so important.”



Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

Started July 14, 2018, 12:00:35 pm
avatar

korrelan

The last invention. in General Project Discussion

Artificial Intelligence -

The age of man is coming to an end.  Born not of our weak flesh but of our unlimited imagination, our mecha progeny will go forth to discover new worlds; they will stand at the precipice of creation, a swan song to mankind's fleeting genius, and weep at the sheer beauty of it all.

Reverse engineering the human brain... how hard can it be? LMAO  

Hi all.

I've been a member for a while and have posted some videos and theories on other peeps' threads; I thought it was about time I started my own project thread to get some feedback on my work, and to log my progress towards the end goal. I think most of you have seen some of my work, but I thought I'd give a quick rundown of my progress over the last ten years or so, for continuity's sake.

I never properly introduced myself when I joined this forum, so first a bit about me. I'm fifty and a family man. I've had a fairly varied career so far: yacht/cabinet builder, vehicle mechanic, electronics design engineer, precision machine/design engineer, web designer, IT teacher and lecturer, bespoke corporate software designer, etc. So I basically have a machine/software technical background, and I now spend most of my time running my own businesses to fund my AGI research, which I work on in my spare time.

I’ve been banging my head against the AGI problem for the past thirty odd years.  I want the full Monty, a self aware intelligent machine that at least rivals us, preferably surpassing our intellect, eventually more intelligent than the culmination of all humans that have ever lived… the last invention as it were (Yeah I'm slightly nutts!).

I first started with heuristics/databases, recurrent neural nets, liquid/echo state machines, etc., but soon realised that each approach I tried only partly solved one aspect of the human intelligence problem… there had to be a better way.

Ants, slime mould, birds, octopuses, etc. all exhibit a certain level of intelligence.  They manage to solve some very complex tasks with seemingly very little processing power. How? There has to be some process/mechanism or trick that they all have in common across their very different neural structures.  I needed to find the 'trick', the essence of intelligence.  I think I've found it.

I also needed a new approach, and decided to literally reverse engineer the human brain.  If I could figure out how the structure, connectome, neurons, synapses, action potentials, etc. would 'have' to function in order to produce similar results to what we were producing on binary/digital machines, it would be a start.

I have designed and written a 3D CAD suite on which I can easily build and edit the 3D neural structures I'm testing. My AGI is based on biological systems; the AGI is not running on the digital computers per se (the brain is definitely not digital), it's running on the emulation/wetware/middleware. The AGI is a closed system; it can only experience its world/environment through its own senses: stereo cameras, microphones, etc.

I have all the bits figured out and working individually, and have just started to combine them into a coherent system…  I'm also building a sensory/motorised torso (in my other spare time lol) for it to reside in, and experience the world as it understands it.

I chose the visual cortex as a starting point: jump in at the deep end and sink or swim. I knew that most of the human cortex comprises repeated cortical columns, very similar in appearance, so if I could figure out the visual cortex I'd have a good starting point for the rest.



The required result and actual mammal visual cortex map.



This is real-time development of a mammal-like visual cortex map, generated from a random neuron sheet using my neuron/connectome design.
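(For readers wanting a feel for how a topographic map can self-organise from a random neuron sheet: korrelan's system is a biological emulation, not a standard ANN, but the textbook analogue is a Kohonen self-organising map. A minimal NumPy sketch, entirely my own illustration and not the project's code:)

Code:
import numpy as np

# Tiny Kohonen SOM: a random sheet of "neurons" self-organises into a
# smooth topographic map of its input space.

rng = np.random.default_rng(0)
grid = 20
weights = rng.random((grid, grid, 3))  # random neuron sheet, 3-D inputs
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1)

for t in range(2000):
    x = rng.random(3)  # random input stimulus
    # best-matching unit: the neuron whose weights are closest to the input
    d = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # neighbourhood radius and learning rate shrink over time
    sigma = (grid / 2) * np.exp(-t / 500)
    lr = 0.5 * np.exp(-t / 1000)
    dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-dist2 / (2 * sigma**2))[..., None]
    weights += lr * h * (x - weights)  # pull the neighbourhood toward x

# after training, neighbouring neurons respond to similar inputs:
# a smooth topographic map has emerged from the random sheet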

Over the years I have refined my connectome design; I now have one single system that can recognise verbal/written speech, recognise objects/faces, and learn at extremely accelerated rates (compared to us, anyway).



Recognising written words; notice the system can still read the words even when jumbled. This is because it's recognising the individual letters as well as the whole word.



Same network recognising objects.



And automatically mapping speech phonemes from the audio data streams; the overlaid colours show areas sensitive to each frequency.



The system is self-learning and automatically categorises data depending on its physical properties.  These are attention columns, naturally forming from the information coming from several other cortex areas; they represent similarity in the data streams.



I've done some work on emotions, but this is still very much a work in progress and extremely unpredictable.



Most of the above vids show small areas of cortex doing specific jobs; this is a view of the whole 'brain'.  This is a 'young' starting connectome.  Through experience, neurogenesis and sleep, neurons and synapses are added to areas requiring higher densities for better pattern matching, etc.



Resting frontal cortex - the machine is 'sleeping', but the high-level networks driven by circadian rhythms are generating patterns throughout the whole cortex.  These patterns consist of fragments of knowledge and experiences as remembered by the system through its own senses.  Each pixel = one neuron.



And just for kicks, a fly-through of a connectome. The editor allows me to move through the system to trace and edit neuron/synapse properties in real time... and it's fun.

Phew! OK, that gives a very rough history of progress. There are a few more vids on my YouTube pages.

Edit: Oh yeah, my definition of consciousness.

The beauty is that the emergent connectome defines both the structural hardware and the software.  The brain is more like a clockwork watch or a Babbage engine than a modern computer.  The design of a cog defines its functionality.  Data is not passed around within a watch; there is no software, yet complex calculations are still achieved.  Each module does a specific job, and only when working as a whole can the full and correct function be realised. (Clockwork Intelligence: Korrelan 1998)

In my AGI model, experiences and knowledge are broken down into their base constituent facets and stored in specific areas of cortex, self-organised by their properties. As the cortex learns and develops, there is usually just one small area of cortex that will respond to/recognise one facet of the current experience frame.  Areas of cortex arise covering complex concepts at various resolutions, and eventually all elements of experience are covered by specific areas, similar to the alphabet encoding all words with just 26 letters.  It's the recombining of these millions of areas that produces/recognises an experience or knowledge.

Through experience, areas arise that even encode/include the temporal aspects of an experience, because a temporal element was present in the experience, along with the order in which those temporal elements were received.

Low-level, low-frequency circadian rhythm networks govern the overall activity (top down), like the conductor of an orchestra.  Mid-range frequency networks supply attention points/areas where common parts of patterns clash on the cortex surface. These attention areas are basically the culmination of the system recognising similar temporal sequences in the incoming/internal data streams or in its frames of 'thought'; at the simplest level they help guide the overall 'mental' pattern (subconscious); at the highest level they force the machine to focus on a particular salient 'thought'.

So everything coming into the system is mapped and learned by both the physical and temporal aspects of the experience.  As you can imagine there is no limit to the possible number of combinations that can form from the areas representing learned facets.

I have a schema for prediction in place, so the system recognises 'thought' frames and then predicts which frame should come next according to what it's experienced in the past.
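(A hypothetical sketch of such a frame-prediction schema, my reconstruction rather than korrelan's actual mechanism: count which 'thought' frame followed which in past experience, then predict the most common successor.)

Code:
from collections import Counter, defaultdict

# First-order frame predictor: record adjacent pairs of frames, then
# predict the most frequently observed successor. Frames are just labels.

transitions = defaultdict(Counter)

def observe(sequence):
    # record every adjacent pair of frames
    for a, b in zip(sequence, sequence[1:]):
        transitions[a][b] += 1

def predict(frame):
    # most frequently seen successor of this frame, if any
    successors = transitions[frame]
    return successors.most_common(1)[0][0] if successors else None

observe(["wake", "coffee", "work", "lunch", "work", "home"])
observe(["shower", "coffee", "work", "gym", "home"])
print(predict("coffee"))  # -> 'work' (it followed 'coffee' both times)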

I think consciousness is the overall 'thought' pattern phasing from one state of situational awareness to the next, guided by both the overall internal 'personality' pattern or 'state of mind' and the incoming sensory streams.

I’ll use this thread to post new videos and progress reports as I slowly bring the system together.  

351 Comments | Started June 18, 2016, 10:11:04 pm
Bot Development Frameworks - Getting Started

Bot Development Frameworks - Getting Started in Articles

What Are Bot Frameworks?

Simply explained, a bot framework is where bots are built and where their behavior is defined. Developing and targeting so many messaging platforms and SDKs for chatbot development can be overwhelming. Bot development frameworks abstract away much of the manual work that's involved in building chatbots. A bot development framework consists of a Bot Builder SDK, Bot Connector, Developer Portal, and Bot Directory. There’s also an emulator that you can use to test the developed bot.

Mar 23, 2018, 20:00:23
A Guide to Chatbot Architecture

A Guide to Chatbot Architecture in Articles

Humans have always been fascinated by self-operating devices, and today it is software called "chatbots" that is becoming more human-like and automated. The combination of immediate response and constant connectivity makes chatbots an enticing way to extend or replace traditional web applications. But how do these automated programs work? Let's have a look.

Mar 13, 2018, 14:47:09
Sing for Fame

Sing for Fame in Chatbots - English

Sing for Fame is a bot that hosts a singing competition.

Users can show their skills by singing their favorite songs.

If someone needs inspiration, the bot provides suggestions, including song lyrics and videos.

The bot then plays each performance to other users, who can rate it.

Based on the ratings, the bot generates a top ten.

Jan 30, 2018, 22:17:57
ConciergeBot

ConciergeBot in Assistants

A concierge service bot that handles guest requests and FAQs, as well as recommends restaurants and local attractions.

Messenger Link : messenger.com/t/rthhotel

Jan 30, 2018, 22:11:55
What are the main techniques for the development of a good chatbot ?

What are the main techniques for the development of a good chatbot ? in Articles

Chatbots are among the most useful and reliable technological helpers for those who own ecommerce websites and similar resources. However, an important problem is that people may not know which technologies are best suited to achieving their goals. In today's article, you can become more familiar with the most important principles of chatbot building.

Oct 12, 2017, 01:31:00 am
Kweri

Kweri in Chatbots - English

Kweri asks you questions of brilliance and stupidity. Provide correct answers to win. Type ‘Y’ for yes and ‘N’ for no!

Links:

FB Messenger
https://www.messenger.com/t/kweri.chat

Telegram
https://telegram.me/kweribot

Slack
https://slack.com/apps/A5JKP5TND-kweri

Kik
http://taell.me/kweri-kik

Line
http://taell.me/kweri-line/

Skype
http://taell.me/kweri-skype/

Oct 12, 2017, 01:24:37 am
The Conversational Interface: Talking to Smart Devices

The Conversational Interface: Talking to Smart Devices in Books

This book provides a comprehensive introduction to the conversational interface, which is becoming the main mode of interaction with virtual personal assistants, smart devices, various types of wearables, and social robots. The book consists of four parts: Part I presents the background to conversational interfaces, examining past and present work on spoken language interaction with computers; Part II covers the various technologies that are required to build a conversational interface, along with practical chapters and exercises using open source tools; Part III looks at interactions with smart devices, wearables, and robots, and then goes on to discuss the role of emotion and personality in the conversational interface; Part IV examines methods for evaluating conversational interfaces and discusses future directions.

Aug 17, 2017, 02:51:19 am
Explained: Neural networks

Explained: Neural networks in Articles

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

Jul 26, 2017, 23:42:33
It's Alive

It's Alive in Chatbots - English

[Messenger] Enjoy making your bot with our user-friendly interface. No coding skills necessary. Publish your bot in a click.

Once LIVE on your Facebook Page, it is integrated within the "Messages" of your page. This means your bot is allowed (or not) to interact with and answer people who contact you through the private "Messages" feature of your Facebook Page, or directly through the Messenger app. You can view all the conversations directly in your Facebook account. This also means that no one needs to download an app; messages are sent directly as notifications to your users.

Jul 11, 2017, 17:18:27 pm