Thursday, February 23, 2017

Parallella: A Supercomputer For Everyone

Making parallel computing easy to use has been described as "a problem as hard as any that computer science has faced". With such a big challenge ahead, we need to make sure that every programmer has access to cheap and open parallel hardware and development tools. Inspired by great hardware communities like Raspberry Pi and Arduino, we see a critical need for a truly open, high-performance computing platform that will close the knowledge gap in parallel programming. The goal of the Parallella project is to democratize access to parallel computing. If we can pull this off, who knows what kind of breakthrough applications could arise? Maybe some of them will even change the world in some small but positive way.

The Parallella Computing Platform

To make parallel computing ubiquitous, developers need access to a platform that is affordable, open, and easy to use. The goal of the Parallella project is to provide such a platform! The Parallella platform will be built on the following principles:
  • Open Access: Absolutely no NDAs or special access needed! All architecture and SDK documents will be published on the web as soon as the Kickstarter project is funded.
  • Open Source: The Parallella platform will be based on free open source development tools and libraries. All board design files will be provided as open source once the Parallella boards are released.
  • Affordability: Hardware costs and SDK costs have always been a huge barrier to entry for developers looking to develop high performance applications. Our goal is to bring the Parallella high performance computer cost below $100, making it an affordable platform for all.
The Parallella platform is based on the Epiphany multicore chips developed by Adapteva over the last 4 years and field tested since May 2011. The Epiphany chip consists of a scalable array of simple RISC processors, programmable in C/C++, connected by a fast on-chip network within a single shared-memory architecture.

Parallella Computer Specifications

The following list shows the major components planned for the Parallella computer:
  • Zynq-7010 Dual-core ARM A9 CPU
  • Epiphany Multicore Accelerator (16 or 64 cores)
  • 1GB RAM 
  • MicroSD Card
  • USB 2.0 (two) 
  • Two general purpose expansion connectors
  • Ethernet 10/100/1000
  • HDMI connection
  • Ships with Ubuntu OS
  • Ships with free open source Epiphany development tools that include a C compiler, multicore debugger, Eclipse IDE, OpenCL SDK/compiler, and runtime libraries.
  • Dimensions are 3.4'' x 2.1''  
Once completed, the 64-core version of the Parallella computer would deliver over 90 GFLOPS of performance and would have horsepower comparable to a theoretical 45 GHz CPU [64 CPU cores * 700 MHz] on a board the size of a credit card, while consuming only 5 Watts under typical workloads. For certain applications, this would provide more raw performance than a high-end server costing thousands of dollars and consuming 400W.
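The arithmetic behind these claims is easy to check. A back-of-envelope sketch (the 2 FLOPS/cycle figure is our assumption, consistent with one fused multiply-add per core per cycle):

```python
cores = 64
clock_ghz = 0.7               # 700 MHz per Epiphany core
flops_per_cycle = 2           # assumed: one fused multiply-add per cycle
board_power_w = 5             # typical workload, whole board

aggregate_ghz = cores * clock_ghz                    # the "theoretical 45 GHz CPU"
peak_gflops = cores * clock_ghz * flops_per_cycle    # roughly 90 GFLOPS
board_gflops_per_watt = peak_gflops / board_power_w  # board-level efficiency

print(f"{aggregate_ghz:.1f} GHz aggregate, {peak_gflops:.1f} GFLOPS peak")
```

Note that the 50 GFLOPS/Watt figure quoted later is for the Epiphany chip alone; the board-level number here includes the ARM host and peripherals.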

The Team Behind Parallella

The Parallella project is being launched by Adapteva, a semiconductor startup company founded in 2008. The core development team consists of Andreas Olofsson, Roman Trogan, and Yaniv Sapir, each with between 10 and 20 years of industry experience. The team has a strong reputation for executing aggressive goals on a shoestring budget. Our latest Epiphany-IV processor was designed in a leading-edge 28nm process and started sampling in July, demonstrating 50 GFLOPS/Watt. To put this in perspective, consider that the Epiphany energy efficiency specs are within striking distance of the 2018 goals set by DARPA for the high-profile Exascale supercomputing project.
Parallella Computer Development Work
  • All major IC components have already been selected for the Parallella board, but cost minimization will continue.
  • We will be engaging with an experienced external board product design team to complete the design and layout of the Parallella boards.
  • We will work with internal and external resources to seamlessly integrate the Epiphany coprocessor drivers and development tools with the Ubuntu distribution currently running on the reference platform.
Production
  • Buying in bulk significantly reduces the cost of the platform.  Without the  large batch build enabled by this project, the cost of the Parallella boards would be many times higher.
Except for the Epiphany multiprocessor chips, the Parallella computer is a fairly standard ARM-based low-cost single-board computer, giving us confidence that we will be able to meet our size and cost constraints.

It's Time

The rest of the industry will eventually come around to the fact that parallel computing is the only path forward, but we don't have time to wait; we need to act now. We hope you will join us in our mission to change the way computers are built. We could put 1,000 cores on a single chip in two years. Are you ready for that?
What will come out of it?  We don't know but we do know that the following applications are DESPERATE for more efficient processing and are stalling today because bigger companies aren't serving their needs.
Consumer: small energy-efficient computers, media boxes, console emulators, movie rendering
Imaging: face detection/recognition, fingerprint matching, object tracking, stereo vision, gesture recognition, remote sensing, video analytics, manufacturing inspection, augmented overlay
Communication: video conferencing, network monitoring, deep packet inspection, software-defined networking
Automotive: autonomous driving, driver assist, fog penetration, glare reduction, holographic heads-up displays, intersection traffic monitoring
High Performance Computing: real-time internet stream analytics, real-time market analytics, portable in-the-field supercomputing, soft encryption engines, code breaking, data logging, in-the-field seismology processing
Medical: portable ultrasound, DNA sequencing
Robotics: robot brains, space electronics, robot sensor units, multi-sensor inertial navigation
Speech: real-time speech recognition, realistic speech synthesis, real-time translation, speaker verification
Unmanned Aerial Vehicles: synthetic aperture radar, hyperspectral imaging, IR imaging, smart stream compression, large focal array sensor imaging, autonomous flight
Wireless Communication: GNU Radio, cognitive radio, small cell base stations

Sunday, January 1, 2017

Augmented Reality – What is it?

Although this site is dedicated to virtual reality, you cannot discuss it without mentioning its very close cousin, augmented reality. But what is it?
Whereas virtual reality immerses your senses completely in a world that only exists in the digital realm, augmented reality takes the real world of the present and projects digital imagery and sound into it. Augmented and virtual reality both fall on the continuum of mediated reality, in which a computer system modifies our perception of reality relative to the "real" world.
As you can probably deduce this means many things qualify as augmented reality. The heads up displays we see in some aircraft and cars that may show you things like “distance to a target”, GPS position or your current speed are a form of augmented reality. Events with digital avatars of deceased musicians such as Michael Jackson and Tupac Shakur projected onto a screen using the Pepper’s Ghost illusion would also qualify under a broad definition of augmented reality.
However, when we hear about augmented reality these days it usually refers to a much more sophisticated, interactive and spatially aware implementation of the concept, in which digital objects such as 3D models or video are projected onto our view of reality as if they were really there.

How Does Augmented Reality Work?


The type of augmented reality you are most likely to encounter uses a range of sensors (including a camera), computer components and a display device to create the illusion of virtual objects in the real world.
Thanks to the popularity of smartphones, which have all the necessary components, they have been the platform on which most commercial augmented reality applications have been released.
In general the device looks for a particular target. This can be anything, but usually it’s a 2D image printed on something like a movie poster. Once the augmented reality application recognizes the target via the camera it processes the image and augments it in some way with pictures and sound. For example, you may see the movie poster spring to life and play a trailer for the film. As long as you look at the poster through the “window” of the display you can see augmented reality instead of plain old vanilla reality.
By using smart algorithms and other sensors such as accelerometers and gyroscopes the device can keep the augmented elements aligned with the image of the real world.
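Under the hood, keeping an overlay pinned to a flat target comes down to estimating a planar homography from matched points between the target image and the camera frame. A minimal NumPy sketch of the fitting step (a real app would get the matched points from a feature tracker on the camera feed; this only illustrates the underlying math):

```python
import numpy as np

def fit_homography(src, dst):
    """Fit a 3x3 planar homography H mapping src points to dst points.

    src, dst: (N, 2) arrays of matched 2D points, N >= 4 (direct linear
    transform, solved via the SVD null space)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)   # singular vector of the smallest singular value

def project(H, pts):
    """Apply homography H to (N, 2) points, handling the projective divide."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]
```

Given the four corners of a detected poster, `project` tells the renderer where the trailer video's corners should land on screen each frame.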
Using a smartphone or tablet computer as a “magic window” into the augmented world is one way we can relay this digital info to our eyes, but there are many other ways to achieve this.
Digital imagery can be projected directly onto physical objects. This is known as projection mapping and can be used to quite striking effect. For example, the Dyadic Mano-a-Mano uses projectors and Microsoft Kinect sensors to provide the user with 3D digital imagery projected directly onto the environment. The user doesn’t need to wear equipment or use any devices. Interaction with this system is highly natural and intuitive.

Projection mapping for augmented reality

Projection mapping as an augmented reality method has a lot of potential, but it requires a controlled and mapped space in order to work. The method that is most likely to supplant smartphone augmented reality as a common implementation outside of the laboratory is one that uses head mounted systems. This is where virtual and augmented reality really begin to converge, as there is no real reason why the head mounted systems used by both technologies cannot be cross-functional. Indeed, head mounted systems that use smartphones often have something known as a camera "pass-through". In other words, although you can't see anything other than the screen of the head mounted display (HMD), it can show you the outside world via the rear-facing camera of the phone. This of course allows for augmented reality without the need for a handheld device. However, unless specifically designed to compensate, this method can leave one feeling a bit disconnected from the experience, since the camera's perspective and lack of depth perception don't quite gel with what the naked eye sees.
One way to get around this is by using a system like those found in Google Glass and Microsoft HoloLens. Both of these devices use something known as a "prism projector". The eyes of the user look out at the world unimpeded, but digital imagery is projected into the prism projection system that sits between the eye and the outside world, making it appear as if those objects are really there, sitting on a table or hanging against the wall. The way this HMD achieves this is complex and fascinating, and it is discussed more fully in our HoloLens article.
There are many ways to achieve the goal of augmented reality, but as you can see the end result is that we see digital information blend with the analogue world. Something that has many, many applications. Some of which we’ll take a closer look at.

Applications of Augmented Reality

Augmented reality has a wide range of applications in several industries and thanks to the rise of consumer smart devices and overall computing technology developments it now has lots of potential in the mainstream consumer space as well.
The two areas where we have seen a lot of commercial development in augmented reality are education and gaming.
The two biggest mainstream video game consoles, the Xbox and PlayStation, have included augmented reality capabilities for the last two console generations. These came in the form of the Kinect (for the Xbox) and the PlayStation Eye or Camera (for the PlayStation 3 and 4 respectively). Because you're facing both the camera and the screen, these implementations are more like augmented reality mirrors, where you see yourself "in" the game and can interact with game characters that look to be in the same room as you.
Mobile augmented reality games are also not rare, and can be found on smartphones, tablets and handheld consoles such as the Nintendo 3DS and Playstation Vita.
Seeing the potential for augmented reality in education isn't hard. It's being implemented in fields such as medicine, where students can benefit from live 3D models. It's possible to use existing learning material (such as pages from a textbook) as targets for augmented reality. So when viewed through the lens of a smartphone, you can see a picture of an engine animate in an engineering textbook, or a working 3D model of a beating heart that you can walk around or rotate by hand.
In medical practice, augmented reality can project information directly onto the body of a patient. For example, the Veinviewer system projects a real-time image of infrared vein scans directly onto the patient's skin, creating the impression that the skin is transparent. This allows the clinician to "see" the veins directly.
Military use cases are also quite clear, since soldiers wearing heads-up displays (HUDs) can see information tagged onto objects in the real world: radar information, orders, or any other relevant sensor data from devices on the network that can provide it. Enemy and friendly positions are of course also useful to know. Augmented reality clearly has a bright future in military applications.
Mobile phones, especially the iPhone, run augmented reality apps that allow you to view computer-generated images superimposed over real-world images. An example of this is an app which helps you to find a restaurant: it does this by displaying restaurant signs/logos as you move in a particular direction.
Another useful type of app is a golf GPS system which helps golfers around a course. It displays yardages for each of the 18 holes, shows where hazards such as bunkers are, and offers advice and support on improving your game. If you are a golfer then this app will appeal to you immensely; look for the Golfscape Augmented Reality Rangefinder in the Apple App Store.
Augmented reality is also used in marketing and advertising as a means of enhancing certain aspects of a product in order to make it more attractive which will boost sales. This is discussed in more detail in our augmented reality marketing article.

10 apps that enhance reality


1. Prynt

This crowdfunded adaptor attaches to your smartphone, allowing you to instantly print out 2x3in photos. When you take a picture, the Prynt app also records a digital video and creates a link to that photo on your phone.

2. Bamzooki

This CBBC programme, broadcast between 2004 and 2010, provided an early large-scale glimpse into the potential of AR. Children could create digital creatures on a website that were then pitted against each other in a TV studio, while the audience watched via a graphics system that mixed the creatures into the real world. There is talk at the BBC of a similar programme returning, one that potentially allows children to pit their creatures against each other on the kitchen table or in the playground.

3. Lego

Most Lego stores now feature a widescreen TV on the wall, relaying a continual live feed of the view directly in front of the screen. If you step into the camera's field with a box of Lego, a virtual, animated representation of the kit inside the box will come to life in front of you.

4. Inkhunter

This simple app lets you see what a tattoo will look like on your body by superimposing the image on to the camera feed from your smartphone. The app comes with a range of tattoo designs that can be edited and shared with friends. It’s also possible to upload your own designs to see how well they suit you.

5. Star Chart

An elegant and hugely popular AR app for smartphones that annotates the night sky. Simply angle your phone toward the heavens and, using GPS positioning, the app will label the stars, moons and planets above you. A quicker and potentially more effective way of learning about our galaxy than any textbook.

6. Snapchat

In the past year, the social media platform, whose videos are reportedly watched more than 6bn times a day, subtly shifted its focus onto AR through a selection of lenses. These lenses modify the phone's camera feed in humorous ways, swapping people's faces, "embiggening" their eyes, or allowing them, for example, to puke rainbows. Snapchat became the first company to make a profit from AR on a large scale by selling lenses and allowing companies to sponsor them.

7. Google Translate

Google’s revelatory translation technology, underpinned by computer learning to constantly improve its interpretation, can now be used to annotate foreign text on the fly, using your phone’s camera feed. While the technology isn’t great at handling reams of text, particularly in non-Roman scripts, it can prove invaluable at handling signage.

8. ARnatomy

Perhaps the closest to Tom Caudell’s original vision for AR, ARnatomy is able to identify and digitally label bones and muscles for medical students.
It uses the camera to identify replicas of human bones, and when the user places a bone in front of the camera, it identifies which bone it is and adds visuals pointing to the parts of the bone, such as where muscles attach.

9. Smartspecs

SmartSpecs enhance the visual appearance of everyday objects to augment vision for partially sighted people. Using a combination of a 3D camera and Android-powered software, the specs highlight the edges and features of nearby objects – from walls, tables, doorways, signposts and buggies to faces – to enhance visibility.

10. Quiver

This relatively simple yet effective use of AR brings colouring books to life. Print out the black and white drawings and colour them in the traditional manner. When you view the completed picture through your phone’s camera using the Quiver app, the picture comes to animated life on screen. Touch the screen and you can interact and play games with the character you have created.

Conclusion

Augmented reality is likely to worm its way into our daily lives more and more in the 21st century. Once wearable computers become more common, it won’t be strange to see people interacting with and reacting to things that, from your perspective, aren’t there. Thanks to technologies such as augmented reality, the way we work with computing devices and think about the divide between digital and analogue reality is likely to change fundamentally. Nothing is stopping you from experiencing augmented reality for yourself today, though. Just hop onto your smartphone’s app store and search for “AR” apps. There are plenty to try, many of them free.


Friday, December 16, 2016

New computer friendly technology built for autistic children


For children with autism, reading for understanding is often challenging, but with the help of new technology they can realise their full potential.


'Hour of Code', a global campaign by Microsoft, commits to increasing access to computer science education for young people, with a focus on specially-abled children.

The initiative, along with c--, an educational hub helping the cause of mentally challenged and autistic children, seeks to introduce coding and computational thinking to promote IT-based opportunities among such children.

"Technology is a promoter of your potential and it doesn't pose as such any threat to the autistic. Such children usually do not want to be told what to do, also they usually do not like eye contact. Therefore, with the technology they are able to do things themselves. It is the only way forward for such children."

Monday, December 12, 2016

4D Visualization


4D-THE MODERN DIMENSION

"4D" is shorthand for "four-dimensional"- the fourth dimension being time. 4D visualization takes three-dimensional images and adds the element of time to the process.

In contrast to 3D imaging diagnostic processes, 4D allows doctors to visualize internal anatomy moving in real time. For example, the movement patterns of fetuses allow conclusions to be drawn about their development, and ultrasound-guided biopsies become more accurate thanks to the visualization of needle movements in real time in all 3 planes. Physicians and sonographers can thus detect or rule out any number of issues, such as vascular anomalies and genetic syndromes.


Concept Of 4D Visualization

In the field of scientific visualization, the term "four dimensional visualization" usually refers to the process of rendering a three dimensional field of scalar values. While this paradigm applies to many different data sets, there are also uses for visualizing data that correspond to actual four-dimensional structures. Four dimensional structures have typically been visualized via wire frame methods, but this process alone is usually insufficient for an intuitive understanding. The visualization of four dimensional objects is possible through wire frame methods with extended visualization cues, and through ray tracing methods. Both methods employ true four-space viewing parameters and geometry.

4D Viewing Vectors and Viewing Frustum


METHOD:

4D-HAMMER involves the following two steps:

(1) Rigid alignment of the 3D images of a given subject acquired at different time points, in order to produce a 4D image. 3D-HAMMER is employed to establish the correspondences between neighboring 3D images, and then align each image (time t) to its previous-time image (t-1) by a rigid transformation calculated from the established correspondences.

(2) Hierarchical deformation of the 4D atlas to the 4D subject images, via a hierarchical attribute-based matching method. Initially, the deformation of the atlas is influenced primarily by voxels with distinctive attribute vectors, thereby minimizing the chances of poor matches and also reducing computational burden. As the deformation proceeds, voxels with less distinctive attribute vectors gradually gain influence over the deformation.
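The rigid transformation in step (1) has a standard closed-form solution once the correspondences are known: the Kabsch algorithm. A NumPy sketch of the idea (an illustration of the general technique, not the actual 4D-HAMMER code):

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t such that R @ p + t ≈ q
    for corresponding rows p of P and q of Q (both (N, 3) arrays)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)        # centroids
    H = (P - cp).T @ (Q - cq)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                     # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

With voxel correspondences from 3D-HAMMER as P (time t) and Q (time t-1), the returned (R, t) is the rigid transform used to stack the 3D images into a 4D image.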

ALGORITHM:

It uses the ray tracing algorithm.

Sunday, December 11, 2016

Bluetooth 5 set to boost the Internet of Things in 2017

Bluetooth 5:

With the launch of Bluetooth 5, Bluetooth® technology continues to evolve to meet the needs of the industry as the global wireless standard for simple, secure connectivity. With 4x range, 2x speed and 8x broadcasting message capacity, the enhancements of Bluetooth 5 focus on increasing the functionality of Bluetooth for the IoT. These features, along with improved interoperability and coexistence with other wireless technologies, continue to advance the IoT experience by enabling simple and effortless interactions across the vast range of connected devices.
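Where do the headline multipliers come from? A rough sketch relative to Bluetooth 4.2 LE (the ~12 dB link-budget gain for the LE Coded PHY and the free-space rule of thumb of doubled range per 6 dB are our approximations, not figures from this announcement):

```python
legacy = {"phy_mbps": 1.0, "adv_payload_bytes": 31}    # Bluetooth 4.2 LE
bt5    = {"phy_mbps": 2.0, "adv_payload_bytes": 255}   # 2M PHY, extended advertising

speed_x = bt5["phy_mbps"] / legacy["phy_mbps"]         # "2x speed"
adv_x = bt5["adv_payload_bytes"] / legacy["adv_payload_bytes"]  # ~8x broadcast capacity

# The LE Coded PHY (S=8) trades bit rate for roughly 12 dB of link budget;
# free-space range doubles for every 6 dB, hence the "4x range" claim.
range_x = 2 ** (12 / 6)

print(speed_x, round(adv_x), range_x)
```

Note that range and speed trade off against each other: the 2 Mbps PHY and the long-range coded PHY are different modes, so a single link gets one or the other, not both.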



The global wireless standard for simple, secure connectivity.

Security

Bluetooth adheres to U.S. federal security regulations, ensuring that all Bluetooth devices are capable of meeting and exceeding strict government security standards.

Low Energy

The power efficiency of Bluetooth with low energy functionality makes it perfect for devices that run for long periods on small power sources, such as coin cell batteries, or on energy-harvesting devices. Bluetooth 5 offers the option of increased range or speed, and it’s always low energy.

Coexists with other technologies

Bluetooth 5 also includes updates that help reduce potential interference with other wireless technologies to ensure Bluetooth devices can coexist within the increasingly complex global IoT environment.


Application:

  • The new Bluetooth standard is now ready for action and should progressively roll out into consumer devices from 2017. Promising a serious increase in performance, Bluetooth 5 is ready to meet the needs of a new generation of connected gadgets with even faster data sharing.
  • Bluetooth is a widely used wireless communication standard for securely sharing data over short distances between two electronic devices via a specific set of radio waves.
  • The technology features in many connected devices, such as laptops, smartphones, wireless headphones and smartwatches, allowing them to connect and communicate wirelessly. Today, Bluetooth plays a key role in the development of the Internet of Things (IoT).



Thursday, December 8, 2016

Rose Lite LED Smart Bulb

What It Is?

Iota Lite is an app-controllable smart LED bulb. The device opens the gate to a plethora of wireless features that’ll redefine the way you've perceived lighting. With Iota Lite in your room, you get access to a variety of features that an ordinary LED bulb cannot provide. For instance, you can change the color of the bulb and set the ambiance of your house to your liking, picking a shade from a palette offering 16 million unique hues. You can also schedule the bulb to shuffle its color at a particular time, so the next time you rise and shine, the bulb will glow dimly in your room, giving you the perfect early-morning experience.
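The "16 million hues" figure is simply the size of the standard 24-bit RGB colour space, with 8 bits per channel:

```python
levels_per_channel = 2 ** 8        # 8 bits per channel -> 256 levels
total_colours = levels_per_channel ** 3   # three channels: red, green, blue
print(total_colours)               # 16777216, i.e. about 16.7 million
```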

Working principle:

IOTA Lite is packed with a Texas Instruments (TI) processor and is fitted with a Toshiba LED that provides a lifespan of 15,000 hours.

The LED technology makes it highly efficient and durable. Lite uses low energy Bluetooth technology to connect with your smartphone.

Application:

A Color for Every Mood!

From the ideal lighting for a party to when you need to relax, this versatile smart bulb can adapt to almost every occasion. Set the colours that you want this smart bulb to emit. You can set the hues to be random or you can make it exude a select few colours, like blues, greens and purples.


Enjoy Spectacular Music & Sound show!

You can sync the Iota smart bulb with music using your mobile device to create the perfect ambience for a celebration. The Lite app lets you synchronize your playlist with the device, so once you play a track, the lighting changes with the song’s rhythm. Now amaze your friends by setting the right party ambiance at your home.

Call/SMS Alerts

Never miss an important call as Lite, with its unique call alert feature, will blink in a certain color if you’ve got a call. You can customize your app with absolute ease and can allow only a few selected contacts to alert you via Lite.

Wireless Light Control

Download the Iota Lite app from the Play Store or App Store to wirelessly control this bulb. Just connect Lite with the IOTA Lite app on Android 4.3 or higher and iOS 6 or higher devices.

Weather Alerts

Use IOTA's app to make this smart bulb exude a particular colour depending on weather conditions. Set a shade of blue for rainy weather or red for sunny weather.
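The rule the app applies is essentially a lookup from weather condition to colour. A hypothetical sketch (the actual IOTA Lite API and its condition names are not public, so the names and colours below are illustrative only):

```python
# Hypothetical weather-to-colour mapping, as RGB triples.
WEATHER_COLOURS = {
    "rain":  (0, 0, 255),      # a shade of blue for rainy weather
    "sunny": (255, 0, 0),      # red for sunny weather
    "snow":  (255, 255, 255),
}

def colour_for(condition, default=(255, 214, 170)):
    """Return the bulb colour for a weather condition, falling back to a
    warm white for conditions with no configured colour."""
    return WEATHER_COLOURS.get(condition.lower(), default)
```

A real implementation would feed `colour_for` from a weather API and send the resulting triple to the bulb over Bluetooth.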

Schedule & Timer

Now automate your home lighting. You can schedule Lite to turn on or change colors at a particular time.


Sunday, December 4, 2016

Google Nose

What is Google nose?
Google's April Fools' Day post on Google Nose, a new service in beta that “leverages new and existing technologies to offer the sharpest olfactory experience available”, may have been a joke, but it's not that far from reality.
