This challenge evaluates algorithms for object detection and image classification at large scale. This year there will be two competitions:
- A PASCAL-style detection challenge on fully labeled data for 200 categories of objects, and
- An image classification plus object localization challenge with 1000 categories.
NEW: This year all participants are encouraged to submit object localization results; in past challenges, submissions to the classification and classification-with-localization tasks were accepted separately. One high-level motivation is to allow researchers to compare progress in detection across a wider variety of objects, taking advantage of the quite expensive labeling effort. Another motivation is to measure the progress of computer vision for large-scale image indexing for retrieval and annotation... (rules and results)
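The post doesn't spell out how localization submissions are scored. As background only: ILSVRC conventionally counts a localization as correct when the predicted box's intersection-over-union (IoU) with a ground-truth box meets a 0.5 threshold (that criterion is an assumption here, not stated above). A minimal sketch:

```python
# Sketch of an IoU-based localization check, assuming the conventional
# ILSVRC criterion (IoU >= 0.5); boxes are (x1, y1, x2, y2) tuples.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes don't overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def localization_correct(pred_box, true_box, threshold=0.5):
    """True if the predicted box overlaps the ground truth enough."""
    return iou(pred_box, true_box) >= threshold
```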
New York Times article:
Started in 2010 by Stanford, Princeton and Columbia University scientists, the Large Scale Visual Recognition Challenge this year drew 38 entrants from 13 countries. The groups use advanced software, in most cases modeled loosely on biological vision systems, to detect, locate and classify a huge set of images taken from Internet sources like Twitter. The contest was sponsored this year by Google, Stanford, Facebook and the University of North Carolina.
Contestants run their recognition programs on high-performance computers based in many cases on specialized processors called G.P.U.s, for graphic processing units.
This year there were six categories based on object detection, locating objects and classifying them... (cont'd)
Rising Colorspace is a site-specific installation for “Metropol Park” at Köllnischer Park 6-7 in Berlin. It is the third systemic robot installation of our colorspace series.
A robot draws its rising movements as bow-like lines onto a shiny metallic wall. All turns and falling movements are left out. From this emerges a wickerwork of lines in rich botanic coloration, which condenses into a colorspace. Each line grows like a bending culm and modulates the colorspace according to a daily color protocol. Rising Colorspace is an evolving system continuously overwriting itself... (cont'd)
From BlueRobotics' Kickstarter:
An efficient, rugged, affordable underwater thruster to propel the future of marine robotics and ocean exploration. ($100 per thruster, est. delivery Nov 2014)
The T100 is made of high-strength, UV-resistant, injection-molded polycarbonate plastic. The core of the motor is sealed and protected with an epoxy coating, and it uses high-performance plastic bearings in place of steel bearings, which rust in saltwater. Everything that isn’t plastic is either aluminum or high-quality stainless steel that doesn’t corrode.
A specially designed propeller and nozzle provide efficient, powerful thrust while active water-cooling keeps the motor cool. Unlike other thrusters, our design doesn’t have any air- or oil-filled cavities - water flows freely through all parts of the motor while it's running. That means it can go deep in the ocean and handle extreme pressures.
The thruster is easy to use: just connect the three motor wires to any brushless electronic speed controller (ESC) and you can control it with an RC radio or a microcontroller. It's usable with Arduino, ArduPilot, Raspberry Pi, BeagleBone, and many other embedded platforms... (kickstarter)
From hitchBOT's page:
I am hitchBOT — a robot from Port Credit, Ontario.
I am traveling from Halifax, Nova Scotia to Victoria, British Columbia this summer. As you may have guessed, robots cannot get driver’s licences yet, so I’ll be hitchhiking my entire way... (cont'd)
Features ($192):
Tegra K1 SOC
- Kepler GPU with 192 CUDA cores
- 4-Plus-1 quad-core ARM Cortex A15 CPU
- 2 GB x16 memory with 64 bit width
- 16 GB eMMC 4.51 memory
- 1 Half mini-PCIE slot
- 1 Full size SD/MMC connector
- 1 Full-size HDMI port
- 1 USB 2.0 port, micro AB
- 1 USB 3.0 port, A
- 1 RS232 serial port
- 1 ALC5639 Realtek Audio codec with Mic in and Line out
- 1 RTL8111GS Realtek GigE LAN
- 1 SATA data port
- SPI 4MByte boot flash
Dr. Dobb's has an in-depth look here.
Ino tools webpage:
Ino is a command line toolkit for working with Arduino hardware
It allows you to:
- Quickly create new projects
- Build a firmware from multiple source files and libraries
- Upload the firmware to a device
- Perform serial communication with a device (aka serial monitor)
Ino can replace the Arduino IDE if you prefer to work with the command line and an editor of your choice, or if you want to integrate the Arduino build process into a third-party IDE.
Ino uses make to perform builds. However, the Makefiles are generated automatically, and you’ll never see them if you don’t want to.
- Simple. No build scripts are necessary.
- Out-of-source builds. Directories with source files are not cluttered with intermediate object files.
- Support for *.ino and *.pde sketches as well as raw *.c and *.cpp.
- Support for Arduino Software versions 1.x as well as 0.x.
- Automatic dependency tracking. Referred libraries are automatically included in the build process. Changes in *.h files lead to recompilation of sources which include them.
- Pretty colorful output.
- Support for all boards that are supported by Arduino IDE.
- Fast. Discovered tool paths and other data are cached across runs. If nothing has changed, nothing is built.
- Flexible. Support for simple ini-style config files to set up machine-specific info like the Arduino model used, the Arduino distribution path, etc., just once.
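The ini-style config mentioned above might look something like the following (a sketch based on Ino's documented per-project `ino.ini` file; the exact option names, like `board-model` and `serial-port`, should be checked against the Ino docs for your version):

```ini
; ino.ini - per-project configuration, one section per subcommand
[build]
board-model = uno

[upload]
board-model = uno
serial-port = /dev/ttyACM0

[serial]
serial-port = /dev/ttyACM0
```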
From Woods Hole Oceanographic Institution's Vimeo page:
In 2013, a team from the Woods Hole Oceanographic Institution took a specially equipped REMUS "SharkCam" underwater vehicle to Guadalupe Island in Mexico to film great white sharks in the wild. They captured more than they bargained for.
Today, Antoine Cully at the Sorbonne University in Paris and a couple of pals say they’ve developed a technique that allows a damaged robot to learn how to walk again in just a few seconds. They say their work has important consequences for the reliability and robustness of future robots, and may also provide some insight into the way that animals adapt to injury... (cont'd)
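The excerpt doesn't describe the actual algorithm, so the following is only a toy illustration of the general trial-and-error idea: keep a repertoire of behaviors scored before damage, then quickly re-test the most promising ones on the damaged robot and keep whichever still works best. All names and numbers here are invented for the example.

```python
# Toy illustration (NOT the paper's algorithm): rapid adaptation by
# re-testing the best pre-damage gaits from a stored repertoire.

def adapt(repertoire, try_gait, trials=5):
    """repertoire: dict mapping gait name -> pre-damage score.
    try_gait: function measuring a gait's real performance now.
    Re-tests the top `trials` pre-damage gaits; returns the best."""
    candidates = sorted(repertoire, key=repertoire.get, reverse=True)[:trials]
    return max(candidates, key=try_gait)

# Hypothetical example: damage breaks any gait using the front-left leg.
repertoire = {"trot": 0.9, "bound": 0.8, "crawl": 0.5, "hop": 0.7}

def try_gait(gait):
    broken = {"trot", "bound"}   # gaits relying on the damaged leg
    return 0.0 if gait in broken else repertoire[gait]

best = adapt(repertoire, try_gait)  # -> "hop"
```

Because only a handful of physical trials are needed (rather than relearning from scratch), adaptation in this style can be very fast, which matches the "few seconds" claim in the summary.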
From JIBO's Indiegogo campaign:
Friendly, helpful and intelligent. From social robotics pioneer Dr. Cynthia Breazeal.
From Tilman Griesel's posts on DIY Drones:
An open source project to make FPV (first-person view) flying with the Raspberry Pi easy to use for everyone.
OpenFPV is a project that uses the latest technology to provide low-latency, easy, and well-tested open source FPV flying. Based on single-board computers, HD cameras, and IEEE 802.11.
- Web interface
- Low-Latency H264 Streaming (≈120ms)
- RESTful API
- Installer (in progress)
- Minimal battery consumption
- Invite more people to join the development team
- Complete the installer
- More field tests with different setups
- Create desktop applications for mac/win with HUD support
- Add OculusVR support
Gaurav Trivedi has put together a landing page with all of Quoc Le’s recent lecture series and accompanying resources:
Dr. Quoc Le from the Google Brain project team (yes, the one that made headlines for creating a cat recognizer) presented a series of lectures at the Machine Learning Summer School (MLSS ’14) in Pittsburgh this week. This is my favorite lecture series from the event so far, and I was glad to be able to attend... (cont'd with all videos)
Parallella Computer Specifications:
The Parallella platform is an open source, energy-efficient, high-performance, credit-card-sized computer based on the Epiphany multicore chips developed by Adapteva. This affordable platform is designed for developing and implementing high-performance, parallel-processing applications that take advantage of the on-board Epiphany chip. The Epiphany 16- or 64-core chips consist of a scalable array of simple RISC processors, programmable in C/C++, connected together by a fast on-chip network within a single shared-memory architecture... (cont'd)
A realtime raytracing example running on the 16-core Epiphany chip:
From Eben Upton, Raspberry Pi Founder:
This isn’t a “Raspberry Pi 2”, but rather the final evolution of the original Raspberry Pi. Today, I’m very pleased to be able to announce the immediate availability, at $35 – it’s still the same price – of what we’re calling the Raspberry Pi Model B+.
The Model B+ uses the same BCM2835 application processor as the Model B. It runs the same software, and still has 512MB RAM; but James and the team have made the following key improvements:
- More GPIO. The GPIO header has grown to 40 pins, while retaining the same pinout for the first 26 pins as the Model B.
- More USB. We now have 4 USB 2.0 ports, compared to 2 on the Model B, and better hotplug and overcurrent behaviour.
- Micro SD. The old friction-fit SD card socket has been replaced with a much nicer push-push micro SD version.
- Lower power consumption. By replacing linear regulators with switching ones we’ve reduced power consumption by between 0.5W and 1W.
- Better audio. The audio circuit incorporates a dedicated low-noise power supply.
- Neater form factor. We’ve aligned the USB connectors with the board edge, moved composite video onto the 3.5mm jack, and added four squarely-placed mounting holes... (cont'd)