Saturday, 13 June 2009

Welcome to 20 Roboholic Community





Roboholic news!!!

Members of the 20 Roboholic team can access indo-code.com to learn the basics of C/C++ programming.


Human-like Vision Lets Robots Navigate Naturally

ScienceDaily (July 17, 2009) — A robotic vision system that mimics key visual functions of the human brain promises to let robots manoeuvre quickly and safely through cluttered environments, and to help guide the visually impaired.


An inside view of VisGuide with the electronic circuits on the main board. The video signals are sent via cables to a lightweight micro-PC carried by the user. (Credit: Decisions in Motion Project (www.decisionsinmotion.org))

It’s something any toddler can do – cross a cluttered room to find a toy.

It's also one of those seemingly trivial skills that have proved to be extremely hard for computers to master. Analysing shifting and often-ambiguous visual data to detect objects and separate their movement from one’s own has turned out to be an intensely challenging artificial intelligence problem.

Three years ago, researchers at the European-funded research consortium Decisions in Motion (http://www.decisionsinmotion.org/) decided to look to nature for insights into this challenge.

In a rare collaboration, neuro- and cognitive scientists studied how the visual systems of advanced mammals, primates and people work, while computer scientists and roboticists incorporated their findings into neural networks and mobile robots.

The approach paid off. Decisions in Motion has already built and demonstrated a robot that can zip across a crowded room guided only by what it “sees” through its twin video cameras, and is hard at work on a head-mounted system to help visually impaired people get around.

“Until now, the algorithms that have been used are quite slow and their decisions are not reliable enough to be useful,” says project coordinator Mark Greenlee. “Our approach allowed us to build algorithms that can do this on the fly, that can make all these decisions within a few milliseconds using conventional hardware.”

How do we see movement?

The Decisions in Motion researchers used a wide variety of techniques to learn more about how the brain processes visual information, especially information about movement.

These included recording individual neurons and groups of neurons firing in response to movement signals, functional magnetic resonance imaging to track the moment-by-moment interactions between different brain areas as people performed visual tasks, and neuropsychological studies of people with visual processing problems.

The researchers hoped to learn more about how the visual system scans the environment, detects objects, discerns movement, distinguishes between the independent movement of objects and the organism’s own movements, and plans and controls motion towards a goal.

One of their most interesting discoveries was that the primate brain does not just detect and track a moving object; it actually predicts where the object will go.

“When an object moves through a scene, you get a wave of activity as the brain anticipates its trajectory,” says Greenlee. “It’s like feedback signals flowing from the higher areas in the visual cortex back to neurons in the primary visual cortex to give them a sense of what’s coming.”

Greenlee compares what an individual visual neuron sees to looking at the world through a peephole. Researchers have known for a long time that high-level processing is needed to build a coherent picture out of a myriad of those tiny glimpses. What's new is the importance of strong anticipatory feedback for perceiving and processing motion.

“This proved to be quite critical for the Decisions in Motion project,” Greenlee says. “It solves what is called the ‘aperture problem’, the problem of the neurons in the primary visual cortex looking through those little peepholes.”

Building a better robotic brain

Armed with a better understanding of how the human brain deals with movement, the project’s computer scientists and roboticists went to work. Using off-the-shelf hardware, they built a neural network with three levels mimicking the brain’s primary, mid-level, and higher-level visual subsystems.

They used what they had learned about the flow of information between brain regions to control the flow of information within the robotic “brain”.

“It’s basically a neural network with certain biological characteristics,” says Greenlee. “The connectivity is dictated by the numbers we have from our physiological studies.”

The computerised brain controls the behaviour of a wheeled robotic platform supporting a moveable head and eyes, in real time. It directs the head and eyes where to look, tracks its own movement, identifies objects, determines if they are moving independently, and directs the platform to speed up, slow down and turn left or right.

Greenlee and his colleagues were intrigued when the robot found its way to its first target – a teddy bear – just like a person would, speeding by objects that were at a safe distance, but passing nearby obstacles at a slower pace.

“That was very exciting,” Greenlee says. “We didn’t program it in – it popped out of the algorithm.”

In addition to improved guidance systems for robots, the consortium envisions a lightweight system that could be worn like eyeglasses by visually or cognitively impaired people to boost their mobility. One of the consortium partners, Cambridge Research Systems, is developing a commercial version of this, called VisGuide.

Decisions in Motion received funding from the ICT strand of the EU’s Sixth Framework Programme for research. The project’s work was featured in a video by the New Scientist in February this year.

Robo-bats With Metal Muscles May Be Next Generation Of Remote Control Flyers

ScienceDaily (July 8, 2009) — Tiny flying machines can be used for everything from indoor surveillance to exploring collapsed buildings, but simply making smaller versions of planes and helicopters doesn't work very well. Instead, researchers at North Carolina State University are mimicking nature's small flyers – and developing robotic bats that offer increased maneuverability and performance.


Small flyers, or micro-aerial vehicles (MAVs), have garnered a great deal of interest due to their potential applications where maneuverability in tight spaces is necessary, says researcher Gheorghe Bunget. For example, Bunget says, "due to the availability of small sensors, MAVs can be used for detection missions of biological, chemical and nuclear agents." But, due to their size, devices using a traditional fixed-wing or rotary-wing design have low maneuverability and aerodynamic efficiency.

So Bunget, a doctoral student in mechanical engineering at NC State, and his advisor Dr. Stefan Seelecke looked to nature. "We are trying to mimic nature as closely as possible," Seelecke says, "because it is very efficient. And, at the MAV scale, nature tells us that flapping flight – like that of the bat – is the most effective."

The researchers did extensive analysis of bats' skeletal and muscular systems before developing a "robo-bat" skeleton using rapid prototyping technologies. The fully assembled skeleton rests easily in the palm of your hand and, at less than 6 grams, feels as light as a feather. The researchers are currently completing fabrication and assembly of the joints, muscular system and wing membrane for the robo-bat, which should allow it to fly with the same efficient flapping motion used by real bats.

"The key concept here is the use of smart materials," Seelecke says. "We are using a shape-memory metal alloy that is super-elastic for the joints. The material provides a full range of motion, but will always return to its original position – a function performed by many tiny bones, cartilage and tendons in real bats."

Seelecke explains that the research team is also using smart materials for the muscular system. "We're using an alloy that responds to the heat from an electric current. That heat actuates micro-scale wires the size of a human hair, making them contract like 'metal muscles.' During the contraction, the powerful muscle wires also change their electric resistance, which can be easily measured, thus providing simultaneous action and sensory input. This dual functionality will help cut down on the robo-bat's weight, and allow the robot to respond quickly to changing conditions – such as a gust of wind – as perfectly as a real bat."
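As a rough illustration of that dual role (not NC State's actual design), a microcontroller could heat a single muscle wire with a PWM current while inferring its contraction from its electrical resistance, measured across a small sense resistor with an ADC. The pin assignments, constants and driver routines in the C sketch below are hypothetical.

/*
 * Illustrative sketch of simultaneous actuation and sensing with a
 * shape-memory-alloy "muscle wire": a PWM current heats the wire, and
 * its contraction is inferred from its electrical resistance, measured
 * across a small sense resistor with an ADC.  Pin names, constants and
 * driver routines are hypothetical placeholders.
 */
#include <stdint.h>

#define SENSE_RESISTOR_OHMS 1.0f
#define ADC_FULL_SCALE      1023.0f
#define SUPPLY_VOLTS        5.0f

/* Placeholders for the microcontroller's own peripherals. */
void     pwm_set_duty(uint8_t percent);   /* heating current           */
uint16_t adc_read_wire_volts_raw(void);   /* voltage across the wire   */
uint16_t adc_read_sense_volts_raw(void);  /* voltage across sense R    */

/* Estimate the wire's resistance from the two ADC readings. */
static float wire_resistance_ohms(void) {
    float v_wire  = adc_read_wire_volts_raw()  * SUPPLY_VOLTS / ADC_FULL_SCALE;
    float v_sense = adc_read_sense_volts_raw() * SUPPLY_VOLTS / ADC_FULL_SCALE;
    float current = v_sense / SENSE_RESISTOR_OHMS;      /* I = V / R   */
    return (current > 0.0f) ? v_wire / current : 0.0f;  /* R = V / I   */
}

/* Simple bang-bang controller: keep heating until the resistance drops
 * to the target (the wire's resistance falls as it contracts), then
 * stop heating so the wire can cool and relax again. */
void muscle_wire_step(float target_ohms) {
    if (wire_resistance_ohms() > target_ohms)
        pwm_set_duty(60);     /* wire still too long: keep heating     */
    else
        pwm_set_duty(0);      /* target reached: let the wire cool     */
}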

In addition to creating a surveillance tool with very real practical applications, Seelecke says the robo-bat could also help expand our understanding of aerodynamics. "It will allow us to do tests where we can control all of the variables – and finally give us the opportunity to fully understand the aerodynamics of flapping flight," Seelecke says.

Bunget will present the research this September at the American Society of Mechanical Engineers Conference on Smart Materials, Adaptive Structures and Intelligent Systems in Oxnard, Calif.

Researchers Unveil Whiskered Robot Rat

ScienceDaily (July 5, 2009) — A team of scientists have developed an innovative robot rat which can seek out and identify objects using its whiskers. The SCRATCHbot robot will be demonstrated this week (1 July 2009) at an international workshop looking at how robots can help us examine the workings of the brain.


Researchers from the Bristol Robotics Lab (a partnership between the University of the West of England and the University of Bristol) and the University of Sheffield have developed the SCRATCHbot, which is a significant milestone in the pan-European “ICEA” project to develop biologically-inspired artificial intelligence systems. As part of this project Professor Tony Prescott, from the University of Sheffield’s Department of Psychology, is working with the Bristol Robotics Lab to design innovative artificial touch technologies for robots that will also help us understand how the brain controls the movement of the sensory systems.

The new technology has been inspired by the use of touch in the animal kingdom. In nocturnal creatures, or those that inhabit poorly-lit places, this physical sense is widely preferred to vision as a primary means of discovering the world. Rats are especially effective at exploring their environments using their whiskers. They are able to accurately determine the position, shape and texture of objects using precise rhythmic sweeping movements of their whiskers, make rapid accurate decisions about objects, and then use the information to build environmental maps.

Robot designs often rely on vision to identify objects, but this new technology relies solely on sophisticated touch technology, enabling the robot to function in spaces such as dark or smoke-filled rooms, where vision cannot be used.

The new technology has the potential for a number of further applications from using robots underground, under the sea, or in extremely dusty conditions, where vision is often seriously compromised. The technology could also be used for tactile inspection of surfaces, such as materials in the textile industry, or closer to home in domestic products, for example vacuum cleaners that could sense textures for optimal cleaning.

Dr Tony Pipe (BRL, UWE) says “For a long time, vision has been the biological sensory modality most studied by scientists. But active touch sensing is a key focus for those of us looking at biological systems which have implications for robotics research. Sensory systems such as rats’ whiskers have some particular advantages in this area. In humans, for example, where sensors are at the fingertips, they are more vulnerable to damage and injury than whiskers. Rats have the ability to operate with damaged whiskers and in theory broken whiskers on robots could be easily replaced, without affecting the whole robot and its expensive engineering.

“Future applications for this technology could include using robots underground, under the sea, or in extremely dusty conditions, where vision is often a seriously compromised sensory modality. Here, whisker technology could be used to sense objects and manoeuvre in a difficult environment. In a smoke filled room for example, a robot like this could help with a rescue operation by locating survivors of a fire. This research builds on previous work we have done on whisker sensing.”

Professor Prescott said: “Our project has reached a significant milestone in the development of actively-controlled, whisker-like sensors for intelligent machines. Although touch sensors are already employed in robots, the use of touch as a principal modality has been overlooked until now. By developing these biomimetic robots, we are not just designing novel touch-sensing devices, but also making a real contribution to understanding the biology of tactile sensing.”

Robot Learns To Smile And Frown

ScienceDaily (July 11, 2009) — A hyper-realistic Einstein robot at the University of California, San Diego has learned to smile and make facial expressions through a process of self-guided learning. The UC San Diego researchers used machine learning to “empower” their robot to learn to make realistic facial expressions.


“As far as we know, no other research group has used machine learning to teach a robot to make realistic facial expressions,” said Tingfan Wu, the computer science Ph.D. student from the UC San Diego Jacobs School of Engineering who presented this advance on June 6 at the IEEE International Conference on Development and Learning.

The faces of robots are increasingly realistic and the number of artificial muscles that control them is rising. In light of this trend, UC San Diego researchers from the Machine Perception Laboratory are studying the face and head of their robotic Einstein in order to find ways to automate the process of teaching robots to make lifelike facial expressions.

This Einstein robot head has about 30 facial muscles, each moved by a tiny servo motor connected to the muscle by a string. Today, a highly trained person must manually set up these kinds of realistic robots so that the servos pull in the right combinations to make specific facial expressions. In order to begin to automate this process, the UCSD researchers looked to both developmental psychology and machine learning.

Developmental psychologists speculate that infants learn to control their bodies through systematic exploratory movements, including babbling to learn to speak. Initially, these movements appear to be executed in a random manner as infants learn to control their bodies and reach for objects.

“We applied this same idea to the problem of a robot learning to make realistic facial expressions,” said Javier Movellan, the senior author on the paper presented at ICDL 2009 and the director of UCSD’s Machine Perception Laboratory, housed in Calit2, the California Institute for Telecommunications and Information Technology.

Although their preliminary results are promising, the researchers note that some of the learned facial expressions are still awkward. One potential explanation is that their model may be too simple to describe the coupled interactions between facial muscles and skin.

To begin the learning process, the UC San Diego researchers directed the Einstein robot head (Hanson Robotics’ Einstein Head) to twist and turn its face in all directions, a process called “body babbling.” During this period the robot could see itself in a mirror and analyze its own expression using facial expression detection software created at UC San Diego called CERT (Computer Expression Recognition Toolbox). This provided the data necessary for machine learning algorithms to learn a mapping between facial expressions and the movements of the muscle motors.
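One way to picture that learning step is as a regression problem: command random servo positions, score the resulting expression with a recogniser (CERT in the real system), and fit a model that predicts the score from the servo commands. The C sketch below is only an illustration under that reading, not the UCSD code; the single "smile" score, the stub recogniser and all constants are hypothetical simplifications.

/*
 * Conceptual sketch only: "body babbling" treated as regression.
 * Random servo commands are issued, a stub recogniser scores the
 * resulting expression, a linear model is fitted by stochastic
 * gradient descent, and the model is then used in reverse to pick
 * servo positions that maximise the predicted score.
 */
#include <stdio.h>
#include <stdlib.h>

#define N_SERVOS   30          /* the Einstein head has about 30 */
#define N_SAMPLES  500
#define LEARN_RATE 0.01

/* Stand-in for "command the servos, look in the mirror, run CERT". */
static double observe_smile_score(const double servo[N_SERVOS]) {
    double s = 0.0;
    for (int i = 0; i < N_SERVOS; i++)          /* pretend only the   */
        s += (i < 4 ? 0.5 : 0.0) * servo[i];    /* first few servos   */
    return s;                                   /* affect the smile   */
}

int main(void) {
    double w[N_SERVOS] = {0};                   /* linear model weights */
    double servo[N_SERVOS];

    /* Babbling phase: random servo commands, observe, fit by SGD. */
    for (int k = 0; k < N_SAMPLES; k++) {
        for (int i = 0; i < N_SERVOS; i++)
            servo[i] = (double)rand() / RAND_MAX;   /* random pose */
        double target = observe_smile_score(servo);
        double pred = 0.0;
        for (int i = 0; i < N_SERVOS; i++) pred += w[i] * servo[i];
        double err = target - pred;
        for (int i = 0; i < N_SERVOS; i++)
            w[i] += LEARN_RATE * err * servo[i];    /* gradient step */
    }

    /* Generation phase: choose servo positions (clipped to 0..1) that
     * the fitted model predicts will give the strongest smile. */
    for (int i = 0; i < N_SERVOS; i++)
        servo[i] = (w[i] > 0.0) ? 1.0 : 0.0;

    printf("learned weight of servo 0: %.3f\n", w[0]);
    return 0;
}

As the article notes below, the researchers' own model may still be too simple to capture the coupled interactions between facial muscles and skin, so this toy linear version is only meant to show the flow: babble, observe, fit, then generate.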

Once the robot learned the relationship between facial expressions and the muscle movements required to make them, the robot learned to make facial expressions it had never encountered.

For example, the robot learned eyebrow narrowing, which requires the inner eyebrows to move together and the upper eyelids to close a bit to narrow the eye aperture.

“During the experiment, one of the servos burned out due to misconfiguration. We therefore ran the experiment without that servo. We discovered that the model learned to automatically compensate for the missing servo by activating a combination of nearby servos,” the authors wrote in the paper presented at the 2009 IEEE International Conference on Development and Learning.

“Currently, we are working on a more accurate facial expression generation model as well as a systematic way to explore the model space efficiently,” said Wu, the computer science PhD student. Wu also noted that the “body babbling” approach he and his colleagues described in their paper may not be the most efficient way to explore the model of the face.

While the primary goal of this work was to solve the engineering problem of how to approximate the appearance of human facial muscle movements with motors, the researchers say this kind of work could also lead to insights into how humans learn and develop facial expressions.

“Learning to Make Facial Expressions,” by Tingfan Wu, Nicholas J. Butko, Paul Ruvolo, Marian S. Bartlett and Javier R. Movellan of the Machine Perception Laboratory, University of California San Diego. Presented on June 6 at the 2009 IEEE 8th International Conference on Development and Learning.


Robot Soccer: Cooperative Soccer Playing Robots Compete

ScienceDaily (July 6, 2009) — The cooperative soccer playing robots of the Universität Stuttgart are world champions in the middle size league of robot soccer. After one of the most interesting competitions in the history of RoboCup, held from 29th June to 5th July 2009 in Graz, the 1. RFC Stuttgart won the 2009 world championship on the last day of the competition, beating the Tech United team from Eindhoven (The Netherlands) 4:1 in an exciting final.


During the competition Stuttgart's robots had to prevail against 13 other teams from eight countries, among them the defending world champion Cambada (Portugal). Besides the teams from Germany, Italy, The Netherlands, Portugal, and Austria, teams from China, Japan, and Iran also competed.

The 1. RFC Stuttgart, which includes staff from two institutes, namely the Department of Image Understanding (Head: Prof. Levi) of the Institute of Parallel and Distributed Systems and the Institute of Technical Optics (Head: Prof. Osten), also achieved 2nd place in the so-called "technical challenge" and a further 1st place in the "scientific challenge".

After the final match of the competition, the middle-size league robots of the 1. RFC Stuttgart - the new world champion - had to play against the human officials of the RoboCup federation. It turned out that the robots were the inferior team. Clearly the RoboCup community still has a long way to go toward its final goal of having a humanoid robot team play against the human world champions by the year 2050.

The success speaks for itself, but one might wonder what scientific interest lies behind the RoboCup competitions. Successful participation requires extensive work on current research topics in computer science such as real-time image processing and architectures, cooperative robotics and distributed planning. Possible applications of this research range from autonomous vehicles, cooperative manufacturing robotics and service robotics to planetary or deep-sea exploration by autonomous robotic systems. In this context, autonomous means that no or only limited human intervention is necessary.

Robotic fish: the latest weapon in the fight against water pollution

Robotic Fish
To read a transcript or hear an interview about the SHOAL robotic fish project with research scientist Luke Speller and Bruce Gellerman of Living on Earth, click here.

Robotic fish, developed by UK scientists, are to be released into the sea for the first time to detect pollution.

The carp-shaped robots will be let loose in the port of Gijon in northern Spain as part of a three-year research project funded by the European Commission and co-ordinated by BMT Group Ltd, an independent engineering and risk management consultancy.

If successful, the team hopes that the fish will be used in rivers, lakes and seas across the world, including Britain, to detect pollution.

The life-like creatures, which will mimic the undulating movement of real fish, will be equipped with tiny chemical sensors to find the source of potentially hazardous pollutants in the water, such as leaks from vessels in the port or underwater pipelines.

The fish will communicate with each other using ultrasonics, and information will be transmitted to the port's control centre via WiFi from the "charging hub" where the fish can charge their batteries. This will enable the authorities to map the source and scale of the pollution in real time.

Unlike previous robotic fish that work with remote controls, these will have autonomous navigation capabilities, enabling them to swim independently around the port without any human interaction. This will also enable them to return automatically to their hub to be recharged when battery life (approximately eight hours) is low.
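The behaviour described here (patrol and sample autonomously, report back over WiFi from the hub, and head home when the roughly eight-hour battery runs low) can be pictured as a small state machine. The C sketch below is only an illustration of that idea, with made-up thresholds and stubbed-out sensors and actuators; it is not the SHOAL control software.

/*
 * Illustrative state machine for the behaviour described above:
 * patrol and sample autonomously, return to the charging hub when the
 * battery runs low, upload data and recharge, then patrol again.
 * Thresholds, stubs and function names are hypothetical.
 */
#include <stdbool.h>
#include <stdio.h>

enum fish_state { PATROL, RETURN_TO_HUB, CHARGING };

/* --- Stubs standing in for the real sensors and actuators --------- */
static double battery = 1.0;                    /* 1.0 = fully charged */
static double battery_level(void)    { return battery; }
static bool   at_hub(void)           { return battery < 0.12; } /* fake */
static void   swim_patrol_pattern(void)       { battery -= 0.010; }
static void   sample_water_chemistry(void)    { /* read chem sensors */ }
static void   navigate_towards_hub(void)      { battery -= 0.005; }
static void   upload_readings_over_wifi(void) { /* hub WiFi link */ }
static void   charge_battery(void)            { battery += 0.050; }

/* --- One control step of the fish's behaviour --------------------- */
static void fish_control_step(enum fish_state *state) {
    switch (*state) {
    case PATROL:
        swim_patrol_pattern();
        sample_water_chemistry();
        if (battery_level() < 0.15)        /* low battery: head home   */
            *state = RETURN_TO_HUB;
        break;
    case RETURN_TO_HUB:
        navigate_towards_hub();
        if (at_hub())
            *state = CHARGING;
        break;
    case CHARGING:
        upload_readings_over_wifi();       /* data reaches port control */
        charge_battery();
        if (battery_level() > 0.95)        /* charged: patrol again     */
            *state = PATROL;
        break;
    }
}

int main(void) {
    enum fish_state state = PATROL;
    for (int step = 0; step < 200; step++)
        fish_control_step(&state);
    printf("final state: %d, battery: %.2f\n", (int)state, battery_level());
    return 0;
}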

Rory Doyle, senior research scientist at BMT Group, described the project as a "world first", adding that scientists involved in designing the fish were using "cutting-edge" methods to detect and reduce water pollution.

"While using shoals of robotic fish for pollution detection in harbours might appear like something straight out of science fiction, there are very practical reasons for choosing this form," he said.

"In using robotic fish we are building on a design created by hundreds of millions of years' worth of evolution which is incredibly energy efficient. This efficiency is something we need to ensure that our pollution detection sensors can navigate in the underwater environment for hours on end."

He added: "We will produce a system that allows the fish to search underwater, meaning that we will be able to analyse not only chemicals on the surface of the water (e.g. oil) but also those that are dissolved in the water."

The five fish are being built by Professor Huosheng Hu and his robotics team at the School of Computer Science and Electronic Engineering, University of Essex. He hopes to release them into the water by the end of next year.

The fish, which cost around £20,000 to make, will measure 1.5 metres (1.6 yards) in length (roughly the size of a seal) and swim at a maximum speed of about one metre (1.1 yards) per second.

He said: "I am incredibly excited about this project. We are designing these fish very carefully to ensure that they will be able to detect changes in environmental conditions in the port and pick up on early signs of pollution spreading, for example by locating a small leak in a vessel.

"The hope is that this will prevent potentially hazardous discharges at sea, as the leak would undoubtedly get worse over time if not located."

See how the robotic fish move on video: http://www.youtube.com/watch?v=gSibkb6aKHM

Disaster Setting At The RoboCup 2009: Flight And Rescue Robots Demonstrated Their Abilities

ScienceDaily (July 3, 2009) — Modern robotics can help where it is too dangerous for humans to venture. Search and rescue robots (S&R robots) have meanwhile become so sophisticated that they have already carried out their first missions in disasters. And for this reason rescue robots will be given a special place at the RoboCup 2009 – the robotics world championships in Graz.



The flight robot in action

The rescue robotics programme provided exciting rescue demonstrations in which two complex disaster scenarios formed the setting for the robots’ performances. An accident involving a passenger car loaded with hazardous materials and a fire on the rooftop of Graz Stadthalle were the two challenges that flight and rescue robots faced on their remote controlled missions. Smoke and flames made the sets as realistic as possible, ensuring a high level of thrills.

Blazing flames on the eighth floor of a skyscraper mean that reconnaissance and the search for the injured would already be life-threatening for fire services. A remote controlled flight robot can help by reconnoitering the situation and sending information by video signals to the rescue services on the ground. As the robotics world championship, the RoboCup recognised the possible uses of rescue robots a long time ago and promoted their development in the separate category “RoboCup Rescue”. RoboCup 2009, organised by TU Graz, dedicates a particular focus to the lifesaving robots with a rescue robot demonstration, a practical course for first responders and a workshop for the exchange of experiences between rescue services and robotics researchers.

A burning rooftop and hazardous materials

Fire and smoke were seen in front of the Graz Stadthalle on Thursday 2nd July 2009, and yet there was no cause for panic – rescue robots were in action. To demonstrate the capabilities of flight and rescue robots, two disaster scenarios were re-enacted as realistically as possible. A crashed automobile loaded with hazardous materials provided a challenge to the rescue robot. Operated by rescuers by remote control, the metal helper named “Telemax” had to retrieve the sensitive substances and bring them out of the danger zone. The flight robot had to find a victim on the rooftop of the Stadthalle and send information in the form of video signals.

Emergency services meet their future helpers

There is an introduction to possible applications of today’s rescue robotics together with a practical course specially for first responders. In the training courses on 3rd and 4th July from 8 to 10am, search and rescue services from all over the world can practise operating flight robots, go on a reconnaissance mission with rescue robots in a specially designed rescue area, or practise various manipulation tasks and recover hazardous materials or retrieve injured persons using remote controlled robots.

A workshop on the topic of rescue robotics will take place following the RoboCup on the 6th July 2009 at TU Graz. The focus will be on an exchange of experiences between first responders and robotics researchers.

Metal Storm Robot Weapon Fills the Air With Lead, Shooting Anything That Moves

Here's a fearsome weapon from Australia that's not brand new but is now being considered for deployment by the US military: a kick-ass machine gun called Metal Storm. It's an Area Denial Weapon System (ADWS) that literally fills the air with lead without the need for a human operator.

The scariest part is that when you set this monster on auto, it can automagically fire 6000 rounds per minute at anything that moves. In this video after the jump, firing so many rounds so quickly sounds like a quick bark of a buzz saw. Imagine one of these in each hand of our upcoming robot overlords:

Video: http://www.youtube.com/v/HyAjzowYP1o

Monday, 14 April 2008 | 13:59 WIB
JAKARTA, MONDAY - Engineers at the US space agency (NASA) have determined the descent path to the surface of Mars. The Phoenix Mars Lander spacecraft is expected to begin its debut as an explorer of the red planet once it lands on 25 May 2008.

The landing site chosen is near Mars' north pole, in a broad valley called Green Valley. However, the final decision on exactly where the craft will touch down will only be made once photos of the surrounding area have been analysed. The Mars Reconnaissance Orbiter (MRO), which is currently orbiting the planet, has taken more than 30 high-resolution images of the valley with its High Resolution Imaging Science Experiment (HiRISE) camera and will continue to take more to support the analysis.

The targeted landing area is an elliptical region measuring 100 kilometres by 20 kilometres. NASA scientists have mapped more than five million rocks in the area to reduce the risk of a crash when Phoenix lands.

"Lokasi pendaran kami mengandung konsentrasi es tertinggi di Mars selain kutub utara. Jika Anda ingi mencari tempat tinggal seperti di permafrost Arktik, kawasan ini yang harus didatangi," ujar Peter Smith, peneliti utama misi tersebut dari Universtas Arizona, AS. Phoenix memang dikirim dengan misi utama menggali lapisan tanah yang kaya es. Wahana tersebut juga dilengkapi alat untuk mengenalisis sampel tanah dan air yang ditemukan untuk mencari bukti-bukti perubahan iklim dan kemungkinan adanya kehidupan mikroba.

Under this plan, Phoenix will rotate 145 degrees relative to the horizontal before diving toward the Martian surface. Its thrusters will then fire for about 35 seconds. The first manoeuvre the craft will make during the landing sequence is to point its antenna toward Earth.

In the final seven minutes, Phoenix must cut its speed from roughly 21,000 kilometres per hour. It will do so by deploying its parachute and, from an altitude of 914 metres, firing thrusters against its direction of travel. When it touches the Martian surface on its three legs, its speed is expected to be only 8 kilometres per hour.

"Mendarat di Mars sungguh sangat menantang. Faktanya, baru tahun 1970-an kami bisa sukses melakukannya di planet ini. Tetap tak ada jaminan berhasil, namun kami melakukan segala sesuat yang mengurangi risiko kegagalan," ujar Doug McCuistion, direktur Program Eksplorasi Mars Nasa di Washington. (NASA/WAH)

Roboholic in the newspaper


These are the ACHIEVEMENTS our 20 Roboholic robotics community has produced


2ND PLACE, LINE TRACER ROBOT ACTION COMPETITION, 12TH PIMITS, EAST JAVA HIGH SCHOOL LEVEL





Our effort is not just talk but action that is critical, dynamic, and optimistic




Roboholic news >>>>

Predictive Powers: A Robot That Reads Your Intention?

ScienceDaily (June 10, 2009) — European researchers in robotics, psychology and cognitive sciences have developed a robot that can predict the intentions of its human partner. This ability to anticipate (or question) actions could make human-robot interactions more natural.


http://www.sciencedaily.com/images/2009/06/090605075302-large.jpg
Joint toy-making activity between robot and man. (Credit: Copyright JAST)

The walking, talking, thinking robots of science fiction are far removed from the automated machines of today. Even today's most intelligent robots are little more than slaves – programmed to do our bidding.

Many research groups are trying to build robots that could be less like workers and more like companions. But to play this role, they must be able to interact with people in natural ways, and play a pro-active part in joint tasks and decision-making. We need robots that can ask questions, discuss and explore possibilities, assess their companion's ideas and anticipate what their partners might do next.

The EU-funded JAST project (http://www.euprojects-jast.net/) brings a multidisciplinary team together to do just this. The project explores ways by which a robot can anticipate/predict the actions and intentions of a human partner as they work collaboratively on a task.

Who knows best?

You cannot make human-robot interaction more natural unless you understand what 'natural' actually means. But few studies have investigated the cognitive mechanisms that are the basis of joint activity (i.e. where two people are working together to achieve a common goal).

A major element of the JAST project, therefore, was to conduct studies of human-human collaboration. These experiments and observations could feed into the development of more natural robotic behaviour.

The researchers participating in JAST are at the forefront of their discipline and have made some significant discoveries about the cognitive processes involved in joint action and decision-making. Most importantly, they scrutinised the ways in which observation plays an important part in joint activity.

Scientists have already shown that a set of 'mirror neurons' are activated when people observe an activity. These neurons resonate as if they were mimicking the activity; the brain learns about an activity by effectively copying what is going on. In the JAST project, a similar resonance was discovered during joint tasks: people observe their partners and the brain copies their action to try and make sense of it.

In other words, the brain processes the observed actions (and errors, it turns out) as if it is doing them itself. The brain mirrors what the other person is doing either for motor-simulation purposes or to select the most adequate complementary action.

Resonant robotics

The JAST robotics partners have built a system that incorporates this capacity for observation and mirroring (resonance).

“In our experiments the robot is not observing to learn a task,” explains Wolfram Erlhagen from the University of Minho and one of the project consortium's research partners. “The JAST robots already know the task, but they observe behaviour, map it against the task, and quickly learn to anticipate [partner actions] or spot errors when the partner does not follow the correct or expected procedure.”

The robot was tested in a variety of settings. In one scenario, the robot was the 'teacher' – guiding and collaborating with human partners to build a complicated model toy. In another test, the robot and the human were on equal terms. “Our tests were to see whether the human and robot could coordinate their work,” Erlhagen continues. “Would the robot know what to do next without being told?”

By observing how its human partner grasped a tool or model part, for example, the robot was able to predict how its partner intended to use it. Clues like these helped the robot to anticipate what its partner might need next. “Anticipation permits fluid interaction,” says Erlhagen. “The robot does not have to see the outcome of the action before it is able to select the next item.”

The robots were also programmed to deal with suspected errors and seek clarification when their partners’ intentions were ambiguous. For example, if one piece could be used to build three different structures, the robot had to ask which object its partner had in mind.

From JAST to Jeeves

But how is the JAST system different to other experimental robots?

“Our robot has a neural architecture that mimics the resonance processing that our human studies showed takes place during joint actions,” says Erlhagen. “The link between the human psychology, experimentation and the robotics is very close. Joint action has not been addressed by other robotics projects, which may have developed ways to predict motor movements, but not decisions or intentions. JAST deals with prediction at a much higher cognitive level.”

Before robots like this one can be let loose around humans, however, they will have to learn some manners. Humans know how to behave according to the context they are in. This is subtle and would be difficult for a robot to understand.

Nevertheless, by refining this ability to anticipate, it should be possible to produce robots that are proactive in what they do.

Not waiting to be asked, perhaps one day a robot may use the JAST approach to take initiative and ask: “Would you care for a cup of tea?”

The JAST project received funding from the ICT strand of the EU’s Sixth Framework Programme for research.



Robotic Therapy Holds Promise For Cerebral Palsy

ScienceDaily (May 21, 2009) — Over the past few years, MIT engineers have successfully tested robotic devices to help stroke patients learn to control their arms and legs. Now, they're building on that work to help children with cerebral palsy.

http://www.sciencedaily.com/images/2009/05/090520161335-large.jpg

A young patient tests out an MIT-developed robotic therapy device at Blythedale Children's Hospital in Westchester County, N.Y. (Credit: Photo / Peter Lang)

"Robotic therapy can potentially help reduce impairment and facilitate neuro-development of youngsters with cerebral palsy," says Hermano Igo Krebs, principal research scientist in mechanical engineering and one of the project's leaders.

Krebs and others at MIT, including professor of mechanical engineering Neville Hogan, pioneered the use of robotic therapy in the late 1980s, and since then the field has taken off.

"We started with stroke because it's the biggest elephant in the room, and then started to build it out to other areas, including cerebral palsy as well as multiple sclerosis, Parkinson's disease and spinal cord injury," says Krebs.

The team's suite of robots for shoulder-and-elbow, wrist, hand and ankle has been in clinical trials for more than 15 years with more than 400 stroke patients. The Department of Veterans Affairs has just completed a large-scale, randomized, multi-site clinical study with these devices.

All the devices are based on the same principle: that it is possible to help rebuild brain connections using robotic devices that gently guide the limb as a patient tries to make a specific movement.

When the researchers first decided to apply their work to children with cerebral palsy, Krebs was optimistic that it would succeed, because children's developing brains are more plastic than adults', meaning they are more able to establish new connections.

The MIT team is focusing on improving cerebral palsy patients' ability to reach for and grasp objects. The patient holds a handle on the robot, which is connected to a computer monitor that displays tasks similar to those of simple video games.

In a typical task, the youngster attempts to move the robot handle toward a moving or stationary target shown on the computer monitor. If the child starts moving in the wrong direction or does not move, the robotic arm gently nudges the child's arm in the right direction.

Krebs began working in robotic therapy as a graduate student at MIT almost 20 years ago. In his early studies, he and his colleagues found that it's important for stroke patients to make a conscious effort during physical therapy. When signals from the brain are paired with assisted movement from the robot, it helps the brain form new connections that help it relearn to move the limb on its own.

Even though a stroke kills many neurons, "the remaining neurons can very quickly establish new synapses or reinforce dormant synapses," says Krebs.

For this type of therapy to be effective, many repetitions are required — at least 400 in an hour-long session.

Results from three published pilot studies involving 36 children suggest that cerebral palsy patients can also benefit from robotic therapy. The studies indicate that robot-mediated therapy helped the children reduce impairment and improve the smoothness and speed of their reaching motions.

The researchers applied their work to stroke patients first because it is such a widespread problem — about 800,000 people suffer strokes in the United States every year. About 10,000 babies develop cerebral palsy in the United States each year, but there is more potential for long-term benefit for children with cerebral palsy.

"In the long run, people that have a stroke, if they are 70 or 80 years old, might stay with us for an average of 5 or 6 years after the stroke," says Krebs. "In the case of cerebral palsy, there is a whole life."

Most of the clinical work testing the device with cerebral palsy patients has been done at Blythedale Children's Hospital in Westchester County, N.Y., and Spaulding Rehabilitation Hospital in Boston. Other hospitals around the country and abroad are also testing various MIT-developed robotic therapy devices.

Krebs' team has focused first on robotic devices to help cerebral palsy patients with upper body therapy, but they have also initiated a project to design a pediatric robot for the ankle.

Among Krebs' and Hogan's collaborators on the cerebral palsy work are Dr. Mindy Aisen '76, former head of the Department of Veterans Affairs Office of Research and Development and presently the director and CEO of the Cerebral Palsy International Research Foundation (CPIRF); Dr. Joelle Mast, chief medical officer, and Barbara Ladenheim, director of research, of Blythedale Children's Hospital; and Fletcher McDowell, former CEO of the Burke Rehabilitation Hospital and a member of the CPIRF board of directors.

MIT's work on robotic therapy devices is funded by CPIRF and the Niarchos Foundation, the Department of Veterans Affairs, the New York State NYSCORE, and the National Center for Medical Rehabilitation Research of the Eunice Kennedy Shriver National Institute of Child Health and Human Development.



Yogyakarta

06 June 2009

Jaipong-Dancing Robots to Be Contested

YOGYAKARTA - A robot contest is being held once again, this time at UGM on 13-14 June 2009. The Directorate of Research and Community Service and the Directorate General of Higher Education have appointed the campus as host of the national-level robot contest.


Three categories will be contested: the Indonesian Robot Contest (KRI), the Indonesian Intelligent Robot Contest (KRCI), and the Indonesian Art Robot Contest (KRSI).


“The competition is slightly different from previous years. The contest adds one new category, the Indonesian Art Robot Contest. The competing robots must be able to dance to a predetermined rhythm, namely the jaipong dance,” explained Ilona, a lecturer in UGM's Faculty of Mathematics and Natural Sciences who is also on the organising committee, yesterday.


Enriching Science and Technology


According to her, the robot contest has become a tradition, as it is held almost every year. It was first held in 1993 and later developed into the Indonesian Intelligent Robot Contest in 2003. This time, UGM has the support of the Directorate of Research and Community Service and the Directorate General of Higher Education.


She explained that the robot contest is intended to fulfil one of the aims of higher education, namely to cultivate and enrich science and technology in order to improve people's standard of living.


After the rounds in regions I, II, III, and IV, the results for the 2009 KRI and KRCI were decided. Based on the jury's decision, the teams entitled to compete at the national level are 24 KRI teams, 21 KRCI teams in the wheeled category, 9 KRCI teams in the legged category, 9 KRCI teams in the expert single category, and 16 KRCI teams in the expert battle category. (D19, P12-27)

A little background on robots >>>

Robot

A humanoid robot playing the trumpet

A robot is a mechanical device that can perform physical tasks, either under human supervision and control or using a predefined program (artificial intelligence). Robots are typically used for tasks that are heavy, dangerous, repetitive or dirty. Most industrial robots are used in manufacturing. Other uses include toxic waste cleanup, underwater and space exploration, mining, search and rescue work, and mine detection. Recently, robots have also begun to enter the consumer market for entertainment and as household aids, such as vacuum cleaners and lawn mowers.

Current developments

When robot builders first tried to imitate humans and animals, they found it very difficult; it required far more computing power than was available at the time. So the emphasis of development shifted to other areas of research. Simple wheeled robots were used to run experiments in behaviour, navigation and path planning. Those navigation techniques have grown into commercially available autonomous robot control systems; the most advanced examples of autonomous navigation control systems available today include laser-based navigation and VSLAM (Visual Simultaneous Localization and Mapping) systems from ActivMedia Robotics and Evolution Robotics.

When engineers were ready to try walking robots again, they started with hexapods and other multi-legged platforms. These robots imitate insects and arthropods in both form and function. The trend toward such body types offers great flexibility and has proven adaptable to many environments, but the cost of the added mechanical complexity has prevented adoption by consumers. With more than four legs, these robots are statically stable, which makes them easier to work with. The goal of research on two-legged robots is to achieve walking using passive-dynamic motion that mimics human movement, but this is still some years away.


Another technical problem holding back the wider adoption of robots is the complexity of handling physical objects in naturally cluttered environments. Better tactile sensors and vision algorithms may solve this problem. The UJI Online Robot from University Jaume I in Spain is a good example of progress in this field.

Recently, great progress has been made in medical robotics, with two specialised companies, Computer Motion and Intuitive Surgical, receiving regulatory approval in North America, Europe and Asia for their robots to be used in minimally invasive surgical procedures. Laboratory automation is also a growing area; here, benchtop robots are used to move biological or chemical samples between instruments such as incubators, liquid handlers and readers. Another area where robots are preferred over human workers is deep-sea and space exploration. For these tasks an arthropod body form is generally favoured. Mark W. Tilden, formerly a specialist at Los Alamos National Laboratory, built inexpensive robots with bent but unjointed legs, while others are trying to build fully articulated crab-like legs.

Experimental winged robots and other examples of biomimicry are also in early development. So-called "nanomotors" and "smart wires" are expected to simplify locomotion drastically, while in-flight stabilisation seems likely to be improved by very small gyroscopes. An important driver of this work is military research into surveillance technology.


Roboholic Articles

Robot Edukasi for Robotics Learners

Robot Edukasi is an educational robot built around a robot controller based on a Microchip PIC or Atmel AVR microcontroller, developed by the NEXT SYSTEM Robotics Learning Center.

Robot Edukasi comes with a number of supporting components such as DC motors (complete with gearbox and wheels), support for servo motors, a line sensor, a sound sensor, a light sensor, a touch sensor and a modulated infrared receiver module. In addition, a number of optional sensors are available, such as temperature sensors, ultrasonic sensors, flame sensors and others.

Robot Edukasi can also be used for learning about automation, since a number of the components needed for such lessons are already built into it.

The set comes with a USB or parallel hardware programmer. Users of current laptops, which generally only have USB ports, can therefore program the robot easily and comfortably.

Many robotics applications can be developed with this set, such as line follower robots, light follower robots, obstacle avoidance robots, remote-controlled robots, fire-fighting robots, sumo robots and more. These applications are covered in full in the Microcontroller Programming and Its Applications in Robotics training class, which also shows that developing robotics applications is not as complicated or difficult as you might imagine; a minimal sketch of one such application is shown below.
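For instance, an obstacle avoidance robot reduces to a very small control loop once the board's drivers are in place. The C sketch below is only an illustration of that loop; the distance threshold and the driver routines (ultrasonic_read_cm, motor_forward, motor_turn_left, delay_ms) are hypothetical placeholders, not an actual NEXT SYSTEM API.

/*
 * Minimal obstacle-avoidance sketch in C, one of the applications
 * listed above.  The sensor and motor functions are hypothetical
 * placeholders for whatever library ships with the actual controller
 * (PIC or AVR).
 */
#include <stdbool.h>

#define OBSTACLE_CM 20           /* turn away when closer than this */

/* Placeholders for the board's own driver routines. */
unsigned int ultrasonic_read_cm(void);       /* distance ahead, in cm */
void motor_forward(void);
void motor_turn_left(void);
void delay_ms(unsigned int ms);

int main(void) {
    while (true) {
        if (ultrasonic_read_cm() < OBSTACLE_CM) {
            motor_turn_left();               /* spin away from obstacle */
            delay_ms(300);
        } else {
            motor_forward();                 /* path is clear           */
        }
        delay_ms(20);                        /* simple control period   */
    }
}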

Robot Edukasi can be programmed in C, BASIC, Pascal and Flowchart.

In the near future, the NEXT SYSTEM Robotics Learning Center will release a robotics learning book that covers Robot Edukasi comprehensively, Bermain Mikrokontroler dan Robotika (Playing with Microcontrollers and Robotics).


Brought to you by : Admin

Robot Line Tracer

Requested by : Vanillaku

| October 11, 2008

Robot Line Tracer is one of the many educational robots developed by the NEXT SYSTEM Robotics Learning Center; it can be used both for learning and for taking part in line tracing or line following competitions.

The robot uses two microcontrollers: one as the central controller (an AVR or PIC microcontroller) and the other as the motor controller (an MCS51 microcontroller). While the robot is moving, the central controller reads the sensor information and then sends the appropriate movement command to the motor controller. Once a command has been sent, the central controller is free to work on other tasks; a rough sketch of this loop is shown below.
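To make the division of labour concrete, the central controller's main loop might look roughly like the following C sketch: read the line sensors, pick a movement, push a one-byte command to the MCS51 motor controller over a serial link, then do other work. The command codes, the two-sensor layout and the uart/sensor routines are hypothetical illustrations, not the robot's actual firmware.

/*
 * Rough sketch of the central controller's loop as described above:
 * read the line sensors, choose a movement, send a one-byte command to
 * the motor controller over a serial link, then go on to other work.
 * Command codes, sensor layout and driver routines are hypothetical.
 */
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical command bytes understood by the motor controller. */
#define CMD_FORWARD 0x01
#define CMD_LEFT    0x02
#define CMD_RIGHT   0x03

/* Placeholders for the board's own sensor and UART routines. */
bool line_sensor_left(void);    /* true when the sensor sees the line  */
bool line_sensor_right(void);
void uart_send_byte(uint8_t b); /* serial link to the motor controller */
void do_other_tasks(void);      /* e.g. telemetry, battery monitoring  */

int main(void) {
    for (;;) {
        bool left  = line_sensor_left();
        bool right = line_sensor_right();

        if (left && !right)
            uart_send_byte(CMD_LEFT);    /* line under left sensor: steer left   */
        else if (right && !left)
            uart_send_byte(CMD_RIGHT);   /* line under right sensor: steer right */
        else
            uart_send_byte(CMD_FORWARD); /* centred (or both/neither): go ahead  */

        /* The motor controller executes the command on its own, so the
         * central controller is free to do other work here. */
        do_other_tasks();
    }
}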

The light sensor module fitted uses LDRs, but it can be replaced with another sensor module made up of IR LED and phototransistor pairs, or with other light sensor modules. Swapping sensor modules is very easy because the mainboard provides a number of open ports.

The robot uses a gearbox that provides enough torque to turn the robot's wheels at more than 100 rpm under full load (including the weight of the batteries).

The line following robot offers wide open opportunities to apply a variety of techniques and methods so that the robot can move smoothly and quickly along the track provided.

The line following robot can be programmed in Assembly, C, BASIC and Pascal.

The modules used are open modules that can be freely extended for a wide range of applications.

For more information about the Robot Line Tracer, please visit SMAN 20 Surabaya.


Brought to you by : Admin