Thursday, July 29, 2010


$8,000 - The Price Of Launching Your Own Satellite

Building your own personal satellite and putting it into orbit is no longer such a far-fetched idea. Interorbital Systems, a small aerospace company based in Mojave, California, is selling kits to design and build small satellites for as little as $8,000. Randa and Roderick Milliron, the brains behind the programme, have been developing the bare-bones, low-cost rocket system for the past 14 years.

“Planet Earth has entered the age of the personal satellite with the introduction of Interorbital's TubeSat personal satellite (PS) Kit. The price of the TubeSat kit actually includes the price of a launch into Low-Earth-Orbit on an IOS NEPTUNE 30 launch vehicle. Since the TubeSats are placed into self-decaying orbits 310 kms above the Earth's surface, they do not contribute to the long-term build-up of orbital debris. After operating for a few months (the exact length of time on orbit is dependent on solar activity), they will safely re-enter the atmosphere and burn up. TubeSats are designed to be orbit-friendly,” the company explains.

The hexadecagon-shaped satellite weighs about 0.75 kg and is about the size of a tissue box.

“Selling flights as a package deal with satellite-building kits is proving to be a winning combination, with more than a dozen customers signed up to fly on the debut launch early next year,” reports Discovery News.

The company is all set to launch the first of four sub-orbital test flights next month. Of the 34 kits, 20 have already been sold to customers. "The acceptance and enthusiasm has been overwhelming," says Randa Milliron, chief executive officer and founder of Interorbital Systems.

Interorbital says a TubeSat is designed to function as a basic satellite bus or as a simple stand-alone satellite. Each TubeSat kit includes the satellite’s structural components, safety hardware, solar panels, batteries, power management hardware and software, transceiver, antennas, microcomputer and the required programming tools.

With these components alone, the builder can construct a satellite that puts out enough power to be picked up on the ground by a hand-held HAM radio receiver. Simple applications include broadcasting a repeating message from orbit or programming the satellite to function as a private orbital HAM radio relay station. These are just two examples. The TubeSat also allows the builder to add his or her own experiment or function to the basic TubeSat kit.

JAFFAR

JANAKIRAM



JANAKI RAM

M.Saranya

What is CAPTCHA and How it Works?

CAPTCHA or Captcha (pronounced cap-ch-uh), which stands for "Completely Automated Public Turing test to tell Computers and Humans Apart", is a type of challenge-response test used to ensure that a response is generated by a human and not by a computer. In simple words, a CAPTCHA is the word-verification test that you will come across at the end of a sign-up form when signing up for a Gmail or Yahoo account. The following image shows typical samples of CAPTCHA.

Captcha

Almost every Internet user encounters CAPTCHAs in daily Internet usage, but only a few are aware of what they are and why they are used. So in this post you will find detailed information on how CAPTCHA works and why it is used.

What Purpose does CAPTCHA Exactly Serve?

CAPTCHA is mainly used to prevent automated software (bots) from performing actions on behalf of actual humans. For example, while signing up for a new email account, you will come across a CAPTCHA at the end of the sign-up form, to ensure that the form is filled out only by a legitimate human and not by automated software or a computer bot. The main goal of CAPTCHA is to pose a test that is simple and straightforward for any human to answer but almost impossible for a computer to solve.

What is the Need to Create a Test that Can Tell Computers and Humans Apart?

To many, CAPTCHAs may seem silly and annoying, but they can protect systems from malicious attacks by people who try to game the system. Attackers can use automated software to generate a huge number of requests, causing a high load on the target server and degrading the quality of service of a given system, whether through abuse or resource expenditure. This can affect millions of legitimate users and their requests. CAPTCHAs can be deployed to protect systems that are vulnerable to email spam, such as the services from Gmail, Yahoo and Hotmail.

Who Uses CAPTCHA?

CAPTCHAs are mainly used by websites that offer services like online polls and registration forms. For example, Web-based email services like Gmail, Yahoo and Hotmail offer free email accounts to their users. However, during each sign-up process, CAPTCHAs are used to prevent spammers from using a bot to generate hundreds of spam mail accounts.

Designing a CAPTCHA System

CAPTCHAs are designed around the fact that computers lack the ability humans have when it comes to processing visual data. It is much easier for a human to look at an image and pick out the patterns than it is for a computer. This is because computers lack the real intelligence that humans have by default. CAPTCHAs are implemented by presenting users with an image containing distorted or randomly stretched characters that only humans should be able to identify. Sometimes the characters are struck through or presented against a noisy background to make it even harder for computers to figure out the patterns.
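To make the challenge-response idea concrete, here is a minimal sketch in Python (an illustration only, not how Gmail or Yahoo actually implement it): the server generates a random challenge string and later checks the user's answer against it. A real CAPTCHA would render the challenge as a distorted image rather than plain text.

```python
import random
import string

def make_challenge(length=6):
    """Generate a random challenge string; a real CAPTCHA would
    render this as a distorted image rather than plain text."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(random.choice(alphabet) for _ in range(length))

def verify(challenge, response):
    """Case-insensitive comparison of the user's answer."""
    return response.strip().upper() == challenge

challenge = make_challenge()
print(verify(challenge, challenge.lower()))  # -> True (a correct answer passes)
```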

Most, but not all, CAPTCHAs rely on a visual test. Some websites implement a totally different CAPTCHA system to tell humans and computers apart. For example, a user is presented with four images, of which three contain pictures of animals and one contains a flower. The user is asked to select only those images that contain animals. This Turing test can easily be solved by any human, but is almost impossible for a computer.

Breaking the CAPTCHA

The challenge in breaking a CAPTCHA lies in the genuinely hard task of teaching a computer to process information in a way similar to how humans think. Algorithms with artificial intelligence (AI) have to be designed to make the computer think like a human when it comes to recognizing the patterns in images. However, there is no universal algorithm that could pass through and break any CAPTCHA system, and hence each CAPTCHA algorithm must be tackled individually. An attack might not work 100 percent of the time, but it can work often enough to be worthwhile to spammers.

BUTTERFLY MAN

A BUTTERFLY MAN

S.JAYA


Is your Nokia Cell Phone Original?

CHECK OUT!

Nokia is one of the largest-selling phone brands across the globe. Most of us own a Nokia phone but are unaware of its originality. Are you keen to know whether your Nokia mobile phone is original or not? Then you are in the right place, and this information is specially meant for you. Your phone's IMEI (International Mobile Equipment Identity) number confirms your phone's originality.


Dial *#06# on your mobile to display your phone's IMEI (serial) number.

Then check the 7th and 8th digits:

Phone serial no. x x x x x x ? ? x x x x x x x

If the 7th and 8th digits are 02 or 20, your phone was assembled in the Emirates, which is very bad quality.

If the 7th and 8th digits are 08 or 80, your phone was manufactured in Germany, which is fair quality.

If the 7th and 8th digits are 01 or 10, your phone was manufactured in Finland, which is very good quality.

If the 7th and 8th digits are 00, your phone was manufactured in the original factory, which is the best quality.

If the 7th and 8th digits are 13, your phone was assembled in Azerbaijan, which is very bad quality and is also said to be dangerous to your health.
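For the curious, the digit check above is easy to automate. The sketch below (Python, illustrative only) pulls out the 7th and 8th digits and also runs the standard Luhn checksum that every genuine 15-digit IMEI satisfies; the sample IMEI is a commonly used test number, not a real handset's.

```python
def luhn_valid(imei: str) -> bool:
    """Standard Luhn checksum that genuine 15-digit IMEIs satisfy:
    double every second digit from the right, sum all the digits,
    and check that the total is divisible by 10."""
    total = 0
    for i, ch in enumerate(reversed(imei)):
        d = int(ch)
        if i % 2 == 1:      # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def digits_7_and_8(imei: str) -> str:
    """The 7th and 8th digits referred to above."""
    return imei[6:8]

imei = "490154203237518"     # a commonly used sample IMEI, not a real handset's
print(digits_7_and_8(imei))  # -> 20
print(luhn_valid(imei))      # -> True
```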


N.Vivega

Silent Sound Technology




It is a real problem when you are on an important phone call in a noisy place such as a movie theater, a bus or a crowded restaurant. But a new technology unveiled at the CeBIT fair on Tuesday addresses this, based on "Silent Sounds". The technology transforms lip movements into a system-generated voice for the listener at the other end of the call.

KIT (Karlsruhe Institute of Technology) has come up with a device that uses electromyography to monitor the muscular movements that occur when we speak. The device senses these movements, converts them into electrical pulses, and then converts the pulses into speech, without the speaker uttering a sound.

This new technology will be very helpful when a person loses his or her voice while speaking, and it allows people to make silent calls without disturbing others; we could even tell a PIN number to a trusted friend or relative without fear of eavesdropping. At the other end, the listener hears a clear voice. An impressive feature of this technology is that it is "an instant polyglot": movements can be immediately transformed into the language of the user's choice. This translation works for languages like English, French and German. But for languages like Chinese, where different tones can hold many different meanings, this poses a problem, said Wand. He also said that in five or maybe ten years this will be part of everyday technology.

SHADIQ BASHA. A

TONGUE DRIVE SYSTEM TO OPERATE COMPUTERS

Scientists have developed a revolutionary new system that helps individuals with disabilities control wheelchairs, computers and other devices simply by using their tongue.

Engineers at the Georgia Institute of Technology say that a new technology called Tongue Drive system will be helpful to individuals with serious disabilities, such as those with severe spinal cord injuries and will allow them to lead more active and independent lives.

Individuals using a tongue-based system need only be able to move their tongue, which is especially important if a person has paralyzed limbs. A tiny magnet, only the size of a grain of rice, is attached to an individual's tongue using implantation, piercing or adhesive. This technology allows a disabled person to use the tongue to move a computer mouse or a powered wheelchair.

Scientists chose the tongue to control the system because, unlike the feet and the hands, which are connected to the brain through the spinal cord, the tongue and the brain have a direct connection through a cranial nerve. In cases where a person has a severe spinal cord injury or other damage, the tongue usually remains mobile enough to activate the system. "Tongue movements are also fast, accurate and do not require much thinking, concentration or effort," said Maysam Ghovanloo, an assistant professor in the Georgia Tech School of Electrical and Computer Engineering.


The motions of the magnet attached to the tongue are detected by a number of magnetic field sensors mounted on a headset worn outside the mouth or on an orthodontic brace inside it. The signals from the sensors are sent wirelessly to a portable computer placed on the wheelchair or attached to the individual's clothing.

The Tongue Drive system is designed to recognize a wide array of tongue movements and to map specific movements to certain commands, taking into account the user's oral anatomy, abilities and lifestyle. "The ability to train our system with as many commands as an individual can comfortably remember is a significant advantage over the common sip-n-puff device that acts as a simple switch controlled by sucking or blowing through a straw," said Ghovanloo.

The Tongue Drive system is a touch-free, wireless and non-invasive technology that needs no surgery to operate.

During trials of the system, six able-bodied participants were trained to use tongue commands to control a computer mouse. The individuals repeated several motions (left, right, up, down, single-click and double-click) to perform computer mouse tasks.

The results of the trials showed that commands were 100 percent accurate with a response time of less than one second, which equates to an information transfer rate of approximately 150 bits per minute.

Scientists also plan to test the system's operation by people with severe disabilities. The next step of the research is to develop software to connect the Tongue Drive system to a great number of devices such as text generators, speech synthesizers and readers. The researchers also plan to upgrade the system with a standby mode that allows the individual to eat, sleep or talk, while prolonging battery life.

Source: National Science Foundation

3G Technology


Here is a simple introduction to some aspects of 3G radio transmission technologies (RTTs). You will find the subjects covered in this section useful if you later consider the more detailed discussions in the sections on 3G Standards and 3G Spectrum.

Simplex vs. Duplex

When people use walkie-talkie radios to communicate, only one person can talk at a time (the person doing the talking has to press a button). This is because walkie-talkie radios only use one communication frequency - a form of communication known as simplex:


Simplex: Using a walkie-talkie you have to push a button to talk one-way.

Of course, this is not how mobile phones work. Mobile phones allow simultaneous two-way transfer of data - a situation known as duplex (if more than two data streams can be transmitted, it is called multiplex):

Duplex: Allows simultaneous two-way data transfers.

The communication channel from the base station to the mobile device is called the downlink, and the communication from the mobile device back to the base station is called the uplink. How can duplex communication be achieved? Well, there are two possible methods which we will now consider: TDD and FDD.

TDD vs. FDD

Wireless duplexing has been traditionally implemented by dedicating two separate frequency bands: one band for the uplink and one band for the downlink (this arrangement of frequency bands is called paired spectrum). This technique is called Frequency Division Duplex, or FDD. The two bands are separated by a "guard band" which provides isolation of the two signals:


FDD: Uses paired spectrum - one frequency band for the uplink, one frequency band for the downlink.

Duplex communications can also be achieved in time rather than by frequency. In this approach, the uplink and the downlink operate on the same frequency, but they are switched very rapidly: one moment the channel is sending the uplink signal, the next moment the channel is sending the downlink signal. Because this switching is performed very rapidly, it does appear that one channel is acting as both an uplink and a downlink at the same time. This is called Time Division Duplex, or TDD. TDD requires a guard time instead of a guard band between transmit and receive streams.

Symmetric Transmission vs. Asymmetric Transmission

Data transmission is symmetric if the data in the downlink and the data in the uplink is transmitted at the same data rate. This will probably be the case for voice transmission - the same amount of data is sent both ways. However, for internet connections or broadcast data (e.g., streaming video), it is likely that more data will be sent from the server to the mobile device (the downlink).


FDD transmission is not so well suited to asymmetric applications, as it uses equal frequency bands for the uplink and the downlink (a waste of valuable spectrum). TDD, on the other hand, does not have this fixed structure, and its flexible bandwidth allocation is well suited to asymmetric applications. For example, TDD can be configured to provide 384 kbps for the downlink (the direction of the major data transfer) and 64 kbps for the uplink (where the traffic largely comprises requests for information and acknowledgements).
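This flexible allocation can be pictured as dividing a frame's time slots in proportion to the desired rates. The sketch below is illustrative only (real TDD frame structures are more involved, and the 14-slot frame size here is made up); it applies the 384/64 kbps asymmetry from the example above.

```python
def allocate_slots(total_slots, downlink_kbps, uplink_kbps):
    """Split a TDD frame's time slots in proportion to the desired
    downlink and uplink data rates (purely illustrative)."""
    total_rate = downlink_kbps + uplink_kbps
    downlink_slots = round(total_slots * downlink_kbps / total_rate)
    return downlink_slots, total_slots - downlink_slots

# The article's 384/64 kbps asymmetry, applied to a made-up 14-slot frame:
down, up = allocate_slots(14, 384, 64)
print(down, up)  # -> 12 2
```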

Macro Cells, Micro Cells, and Pico Cells

The 3G network might be divided up in hierarchical fashion:

* Macro cell - the area of largest coverage, e.g., an entire city.
* Micro cell - the area of intermediate coverage, e.g., a city centre.
* Pico cell - the area of smallest coverage, e.g., a "hot spot" in a hotel or airport.


Why is there this sub-division of regions? It is because smaller regions (shorter ranges) allow higher user density and faster transmission rates. This is why they are called "hot spots".

TDD mode does not allow long range transmission (the delays incurred would cause interference between the uplink and the downlink). For this reason, TDD mode can only be used in environments where the propagation delay is small (pico cells). As was explained in the previous section on symmetric transmission vs. asymmetric transmission, TDD mode is highly efficient for transmission of internet data in pico cells.

TDMA vs. CDMA

We have considered how a mobile phone can send and receive calls at the same time (via an uplink and a downlink). Now we will examine how many users can be multiplexed into the same channel (i.e., share the channel) without getting interference from other users, a capability called multiple access. For 3G technology, there are basically two competing technologies to achieve multiple access: TDMA and CDMA.

TDMA is Time Division Multiple Access. It works by dividing a single radio frequency into many small time slots. Each caller is assigned a specific time slot for transmission. Again, because of the rapid switching, each caller has the impression of having exclusive use of the channel.

CDMA is Code Division Multiple Access. CDMA works by giving each user a unique code. The signals from all the users can then be spread over a wide frequency band. The transmitting frequency for any one user is not fixed but is allowed to vary within the limits of the band. The receiver has knowledge of the sender's unique code, and is therefore able to extract the correct signal no matter what the frequency.
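A toy example makes the code-division idea clearer. In the sketch below (illustrative only; real CDMA systems use far longer codes and add noise handling), two users share the channel using short orthogonal spreading codes. Correlating the combined signal with one user's code recovers that user's bit, while the other user's contribution cancels out.

```python
# Two short orthogonal spreading codes (Walsh-style, +/-1 chips).
code_a = [1, 1, 1, 1]
code_b = [1, -1, 1, -1]

def spread(bit, code):
    """Spread one data bit (+1 or -1) across all the chips of a code."""
    return [bit * chip for chip in code]

def despread(signal, code):
    """Correlate the received signal with one user's code; the other
    user's contribution cancels because the codes are orthogonal."""
    return sum(s * chip for s, chip in zip(signal, code)) // len(code)

# Both users transmit at the same time; the channel simply adds the chips.
channel = [a + b for a, b in zip(spread(1, code_a), spread(-1, code_b))]

print(despread(channel, code_a))  # -> 1   (user A sent +1)
print(despread(channel, code_b))  # -> -1  (user B sent -1)
```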

This technique of spreading a signal over a wide frequency band is known as spread spectrum. The advantage of spread spectrum is that it is resistant to interference - if a source of interference blocks one frequency, the signal can still get through on another frequency. Spread spectrum signals are therefore difficult to jam, and it is not surprising that this technology was developed for military uses.

Finally, let's consider another robust technology originally developed by the military which is finding application with 3G: packet switching.

Circuit Switching vs. Packet Switching

Traditional connections for voice communications require a physical path connecting the users at the two ends of the line, and that path stays open until the conversation ends. This method of connecting a transmitter and receiver by giving them exclusive access to a direct connection is called circuit switching.

Most modern networking technology is radically different from this traditional model because it uses packet data. Packet data is information which is:

1. chopped into pieces (packets),
2. given a destination address,
3. mixed with other data from other sources,
4. transmitted over a line with all the other data,
5. reconstituted at the other end.

Packet-switched networks chop the telephone conversation into discrete "packets" of data like pieces in a jigsaw puzzle, and those pieces are reassembled to recreate the original conversation. Packet data was originally developed as the technology behind the Internet.
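The five numbered steps above can be sketched in a few lines of Python (illustrative only; real packet headers carry far more than an address and a sequence number). Two senders' packets are mixed onto one line, shuffled to simulate arbitrary arrival order, and each recipient's message is reassembled from the headers.

```python
import random

def packetize(message, payload_size, dest):
    """Steps 1 and 2: chop the message into fixed-size payloads and give
    each packet a header with a destination address and sequence number."""
    chunks = [message[i:i + payload_size]
              for i in range(0, len(message), payload_size)]
    return [{"dest": dest, "seq": n, "payload": c} for n, c in enumerate(chunks)]

def reassemble(packets, dest):
    """Step 5: pick out one recipient's packets and restore their order."""
    mine = [p for p in packets if p["dest"] == dest]
    return "".join(p["payload"] for p in sorted(mine, key=lambda p: p["seq"]))

# Steps 3 and 4: two senders' packets are mixed and sent over one line.
channel = packetize("hello world", 4, "A") + packetize("packet data", 4, "B")
random.shuffle(channel)

print(reassemble(channel, "A"))  # -> hello world
print(reassemble(channel, "B"))  # -> packet data
```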


A data packet.

The major part of a packet's contents is reserved for the data to be transmitted. This part is called the payload. In general, the data to be transmitted is arbitrarily chopped-up into payloads of the same size. At the start of the packet is a smaller area called a header. The header is vital because the header contains the address of the packet's intended recipient. This means that packets from many different phone users can be mixed into the same transmission channel, and correctly sorted at the other end. There is no longer a need for a constant, exclusive, direct channel between the sender and the receiver.

Packet data is added to the channel only when there is something to send, and the user is only charged for the amount of data sent. For example, when reading a small article, the user will only pay for what's been sent or received. However, both the sender and the receiver get the impression of a communications channel which is "always on".

On the downside, packets can only be added to the channel when there is an empty slot, so a guaranteed speed cannot be given. The resultant delays pose a problem for voice transmission over packet networks, and are the reason why internet pages can be slow to load.

BY

ARUN PRAKASH (MSC.IT)

N.GOWRI

NEXI - The Robot Express Facial Emotions

A robot is a virtual or mechanical artificial agent made by humankind - a mechanism that can move automatically. The term refers both to physical robots and to virtual software agents. A robot is an electric machine that has some ability to interact with physical objects and can be given electronic programming to perform a specific task or a whole range of tasks or actions. It may also have some ability to perceive and absorb data on physical objects or on its local environment, to process data, or to respond to various stimuli. Robots do not normally show emotions, because they do not have hearts like humans. But there is a robot that can express human emotions through facial expressions, called NEXI.




NEXI is a robot that displays facial emotions, developed by the MIT (Massachusetts Institute of Technology) Media Lab's Personal Robots Group in collaboration with Prof. Rod Grupen at the University of Massachusetts Amherst and two MIT robotics spin-off companies.



The head and face of NEXI were designed by Xitome Design with MIT. The expressive robotics started with a neck mechanism sporting four degrees of freedom (DoF) at the base, plus pan-tilt-yaw of the head itself. The mechanism has been constructed to time the movements so they mimic human speed. NEXI's face has been designed to use gaze, eyebrows, eyelids and an articulated mandible, which help in expressing a wide range of different emotions.




NEXI has a color CCD camera in each eye, as well as an indoor active 3-D infrared camera in its head and four microphones to support sound localization. It has a laser rangefinder that supports real-time tracking of objects, people and voices as well as indoor navigation, and it has hands that can be used to manipulate objects.




The chassis of NEXI is also advanced: it is based on the uBot5 mobile manipulator developed by the Laboratory for Perceptual Robotics at the University of Massachusetts Amherst. The mobile base can balance dynamically on two wheels, so NEXI has what amounts to a Segway-like body. The arms can pick up ten pounds, and the plastic covering of the chassis can detect human touch.

Wednesday, July 28, 2010

FIND OUT IF U CAN
A man wanted to enter an exclusive club but did not know the password that was required. He waited by the door and listened. A club member knocked on the door and the doorman said, "twelve." The member replied, "six" and was let in. A second member came to the door and the doorman said, "six." The member replied, "three" and was let in. The man thought he had heard enough and walked up to the door. The doorman said, "ten" and the man replied, "five". But he was not let in. What should he have said?

JAFFAR
TRY IT
A man decides to buy a nice horse. He pays $60 for it, and he is very content with the strong animal. After a year, the value of the horse has increased to $70 and he decides to sell the horse. But already a few days later he regrets his decision to sell the beautiful horse, and he buys it again. Unfortunately he has to pay $80 to get it back, so he loses $10. After another year of owning the horse, he finally decides to sell the horse for $90. What is the overall profit the man makes?

VASANTH.VP

How HTML5 Will Shake Up the Web

HTML5, the next version of the markup language used to build Web pages, has attracted attention for its ability to show video inside a Web browser without using plug-ins, such as Adobe's Flash. But lesser-known features could ultimately have a much bigger impact on how users experience the Web.
Credit: Technology Review

Experts say that what HTML5 does behind the scenes--such as its network communications and browser storage features--could make pages load faster (particularly on sluggish mobile devices), make Web applications work more smoothly, and even enable browsers to read older Web pages more easily.

Many websites now act like desktop applications--Web-based office productivity suites and photo-editing tools, for example. But many of the sophisticated features of these sites depend on connections that developers create between different Web technologies, such as HTML, JavaScript, and cascading style sheets (CSS)--connections that don't always work perfectly. As a result, websites can be sluggish, may work differently from browser to browser, and can be vulnerable to security holes.

Bruce Lawson, who evangelizes about open Web standards at Opera Software, says that to make websites perform functions the Web wasn't originally designed for, developers must perform complex coding tasks that can easily introduce errors and make applications fail.

The group working on HTML5, Lawson says, was given the tall order of making the specification more forgiving than its predecessors so that older or improperly coded websites will work better in HTML5-enabled browsers. They also wanted to extend the specification forward to support modern trends such as rich Internet applications. "The basis of HTML5 is relentlessly pragmatic," he says. "It's designed to reflect what people are actually doing."

Experts point to a feature called Web Sockets as an example of the improvements that HTML5 can offer. Web Sockets provide a website with an application programming interface (API) that opens an ongoing connection between a page and a server, so that information can pass between them in real-time. Normally, the browser has to make a request every time it wants an update.

The effect of Web Sockets is something like moving from having a conversation via e-mail to having it via instant message, says Ben Galbraith, who cofounded the Web development site Ajaxian.com, and is director of developer relations at Palm. With e-mail, each message is sent as a single event, while instant messages allow for a smooth, ongoing conversation.

Web developers have previously devised ways to keep browsers and servers in constant communication, but Galbraith describes the techniques as "ingenious hacks" that are complicated to execute and don't scale well. Web Sockets, he says, promises an easy way for developers to create Web pages that change in real time--increasingly important with the proliferation of more sources of real-time data, such as instant status updates from social networking users. Users can expect to see Web applications with real-time feeds running more smoothly and with fewer errors.

HTML5 could also help Web applications work better when devices are disconnected from the Internet or intermittently connected, as is common with smart phones, says Alon Salant, who owns Carbon Five, a San Francisco-based company that specializes in building Web applications. A feature called Web Storage lets Web applications store more data inside the browser, retrieve it more intelligently, and control how browsers save parts of pages for faster reloading.

Galbraith is also excited about several features of the newest version of CSS that are designed to work with HTML5. These features will make Web pages more responsive to user input and allow for higher-quality graphics-- things that Web pages aren't normally good at. HTML5 allows developers to embed windows of animation onto a page, but Galbraith says new CSS functionality would perform better.

Lawson says users will also see improved performance from other features of HTML5. For example, improvements in the way browsers handle forms will reduce the amount of JavaScript needed and speed up page loading, particularly on mobile devices.

Chris Blizzard, Mozilla's director of evangelism, points to the significance of the HTML5 parser. A browser's parser reads the markup used to build a page and figures out how to display it. Blizzard says this is one of the most significant parts of the specification. It's meant to make browsers more interoperable, particularly in the way they handle badly written code. Instead of letting each browser maker decide how to handle imperfect code, the parser specifies what responses to errors should be. This should give users a more consistent experience, regardless of the browser they're using, he says.

While HTML5 seems to present a long list of big changes, Lawson says, the main purpose is to provide simpler ways to do what developers were already doing, making it less likely that they will make errors. Lawson says, "The greater the simplicity, the greater the robustness and therefore the greater the experience for the end user--that's the take I've got."

N.Vivega

Monday, July 26, 2010

The Future of Computer Technology

In the past twenty years, there has been a dramatic increase in the processing speed of computers, network capacity and the speed of the internet. These advances have paved the way for revolutions in fields such as quantum physics, artificial intelligence and nanotechnology, and they will have a profound effect on the way we live and work; the virtual reality we see in movies like The Matrix may actually come true in the next decade or so.

NANOCOMPUTERS

Scientists are trying to use nanotechnology to make very tiny chips, electrical conductors and logic gates. Using nanotechnology, chips can be built up one atom at a time, and hence there would be no wastage of space, enabling much smaller devices to be built. Using this technology, logic gates will be composed of just a few atoms, electrical conductors (called nanowires) will be merely an atom thick, and a data bit will be represented by the presence or absence of an electron.

A component of nanotechnology, nanocomputing will give rise to four types of nanocomputers:

• Electronic nanocomputers
• Chemical and Biochemical nanocomputers
• Mechanical nanocomputers
• Quantum nanocomputers

Electronic nanocomputers
Electronic nanocomputers are created through microscopic circuits using nanolithography. [Nanocomputers]

Chemical and Biochemical nanocomputers
The interaction between different chemicals and their structures is used to store and process information in chemical nanocomputers. In order to create a chemical nanocomputer, engineers need to be able to control individual atoms and molecules so that these atoms and molecules can be made to perform controllable calculations and data storage tasks.

Mechanical nanocomputers
A mechanical nanocomputer uses tiny mobile components called nanogears to encode information. Some scientists predict that such mechanical nanocomputers will be used to control nanorobots.

Quantum nanocomputers
A quantum nanocomputer stores data in the form of atomic quantum states or spin. Single-electron memory (SEM) and quantum dots are examples of this type of technology.

Humanizing Nanocomputers
Apart from this, scientists aim to use nanotechnology to create nanorobots that will serve as programmable antibodies. This will help protect humans against pathogenic bacteria and viruses that keep mutating, rendering many remedies ineffective against new strains. Nanorobots would overcome this problem by selectively reprogramming themselves to destroy the new pathogens. Nanorobots are predicted to be part of the future of human medicine.


• SPRAY-ON NANO COMPUTERS

Consider that research is being done at Edinburgh University to create "spray-on computers the size of a grain of sand" that will transform information technology. The research team aims to achieve this goal within four years.
When these nanocomputers are sprayed onto the chests of coronary patients, the tiny cells record the patient's health and transmit the information back to a hospital computer. This would enable doctors to monitor heart patients who are living at home.

QUANTUM COMPUTERS

A quantum computer uses quantum mechanical phenomena, such as entanglement and
superposition to process data. Quantum computation aims to use the quantum properties of particles to represent and structure data. Quantum mechanics is used to understand how to perform operations with this data. The quantum mechanical properties of atoms or nuclei allow these particles to work together as quantum bits, or qubits. These qubits work together to form the computer's processor and memory. Qubits can interact with each other while being isolated from the external environment and this enables them to perform certain calculations much faster than conventional computers.

By computing many different numbers simultaneously and then interfering the results to get a single answer, a quantum computer can perform a large number of operations in parallel and ends up being much more powerful than a digital computer of the same size.
"In the tiny spaces inside atoms, the ordinary rules of reality ... no longer hold. Defying all common sense, a single particle can be in two places at the same time. And so, while a switch in a conventional computer can be either on or off, representing 1 or 0, a quantum switch can paradoxically be in both states at the same time, saying 1 and 0.... Therein lies the source of the power." Whereas three ordinary switches could store any one of eight patterns, three quantum switches can hold all eight at once, taking "a shortcut through time." [ScientificAmerican.com]
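The "eight patterns at once" claim can be illustrated with a toy amplitude vector. This is a minimal sketch, not a real quantum simulator: it only shows that an n-qubit register corresponds to a vector of 2^n amplitudes, so a uniform superposition over 3 qubits assigns weight to all 8 bit patterns simultaneously.

```python
import math

# Toy illustration: a 3-qubit register is a vector of 2**3 = 8 amplitudes,
# one per classical bit pattern. Three ordinary switches hold exactly one of
# these 8 patterns; a uniform superposition gives every pattern equal weight.

def uniform_superposition(n_qubits):
    """Return the amplitude vector of an equal superposition over all states."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)   # equal amplitude for each basis state
    return [amp] * dim

state = uniform_superposition(3)
print(len(state))                            # 8 basis states held at once
print(round(sum(a * a for a in state), 6))   # probabilities sum to 1
```

Measuring such a register collapses it to one of the 8 patterns; the power comes from operating on all amplitudes before measurement.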

Quantum computers could prove to be useful for running simulations of quantum mechanics. This would benefit the fields of physics, chemistry, materials science, nanotechnology, biology and medicine because currently, advancement in these fields is limited by the slow speed of quantum mechanical simulations.

Quantum computing is ideal for tasks such as cryptography, modeling and indexing very large databases. Many government and military funding agencies are supporting quantum computing research to develop quantum computers for civilian and national security purposes, such as cryptanalysis.

ARTIFICIAL INTELLIGENCE

The term “Artificial Intelligence” was coined in 1956 by John McCarthy, then at Dartmouth College. It is a branch of computer science that aims to make computers behave like humans. [Artificial Intelligence] Artificial Intelligence includes programming computers to make decisions in real-life situations (e.g. some of these “expert systems” help physicians diagnose diseases based on symptoms), programming computers to understand human languages (natural language processing), programming computers to play games such as chess and checkers (game playing), programming computers to hear, see and react to other sensory stimuli (robotics), and designing systems that mimic human intelligence by attempting to reproduce the types of physical connections between neurones in the human brain (neural networks).

Natural-language processing would allow ordinary people who don’t have any knowledge of programming languages to interact with computers.

So what does the future of computer technology look like after these developments?

Through nanotechnology, computing devices are becoming progressively smaller and more powerful. Everyday devices with embedded technology and connectivity are becoming a reality, as ever smaller and faster computers can now be built into them.

This has led to the idea of pervasive computing, which aims to integrate software and hardware into all man-made and some natural products. It is predicted that almost any item, from clothing, tools, appliances, cars, homes and coffee mugs to the human body, will be embedded with chips that connect it to an infinite network of other devices. [Pervasive Computing]
Hence, future network technologies will be combined with wireless computing, voice recognition, Internet capability and artificial intelligence to create an environment in which the connectivity of devices is always available yet never inconvenient or outwardly visible. In this way, computer technology will saturate almost every facet of our lives. What seems like virtual reality at the moment will become the human reality in the future of computer technology.


N.VIVEGA

SHADIQ BASHA. A

RECENT INVENTIONS

Various recent inventions: a robot with human expressions, the mystery of black holes, 4G technology, 3-D processor chips, an operating system that may take the place of Windows, evidence of water on Mars, and many others.

Some of the technology details are given here:

1) 4G TECHNOLOGY:

Fourth-generation (4G) communication technology is the next step for wireless communication systems.

A 4G system is about 200 times faster than present 2G mobile data rates and 10 times faster than present 3G broadband mobile. The data rate for present 3G broadband mobile is 2 Mbit/s, whereas the 2G mobile data rate is 9.6 kbit/s.

4G DATA RATES AND ITS MERITS:

4G mobile data transmission rates are planned to be up to 20 megabits per second, which means it will be about 10-20 times faster than standard ADSL services.

The main objectives of 4G are:

1) 4G will be a fully IP-based integrated system.

2) It will be capable of providing speeds of 100 Mbit/s to 1 Gbit/s both indoors and outdoors.

3) It can provide premium quality and high security.

4) 4G will offer all types of services at an affordable cost.

4G technology allows high-quality, smooth video transmission. It will enable fast downloading of full-length songs or music pieces in real time.
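The data rates quoted above make the difference concrete. A rough back-of-the-envelope calculation, assuming a hypothetical 5 MB song file:

```python
# Rough download-time arithmetic for the data rates quoted above.
# The 5 MB song size is an illustrative assumption, not from the article.

RATES_BITS_PER_SEC = {
    "2G": 9_600,          # 9.6 kbit/s
    "3G": 2_000_000,      # 2 Mbit/s
    "4G": 20_000_000,     # 20 Mbit/s (planned)
}

FILE_BITS = 5 * 8 * 1_000_000  # 5 MB song expressed in bits

def download_seconds(rate_bps):
    """Seconds needed to move FILE_BITS at the given bit rate."""
    return FILE_BITS / rate_bps

for gen, rate in RATES_BITS_PER_SEC.items():
    print(f"{gen}: {download_seconds(rate):.1f} s")
```

At 2G rates the song takes over an hour; at the planned 4G rate it takes about two seconds, which is what makes real-time downloading plausible.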

2) NEW OPERATING SYSTEM:

Windows is a world-famous and reliable operating system for computers. Microsoft launched its first Windows operating system in 1985. Since then, Microsoft has presented various kinds of Windows operating systems, such as Windows 2000, XP and Vista. But in 2010 Microsoft is reportedly going to change this concept by launching a cloud-based operating system, and rumors say that MIDORI will be its first such operating system, one that could replace Windows entirely on the computer map.

The main idea behind MIDORI is to develop a lightweight, portable OS which can be mated easily to lots of various applications.

3) 3D PROCESSOR CHIP:

Scientists at the University of Rochester have developed a new generation of computer processors. These processors are based on 3-dimensional circuits, in contrast to the 2-dimensional circuits of today.

This can be called the next major advance in computer processor technology. The latest 3-D processor runs at 1.4 gigahertz in the university's labs.

This design means that tasks such as synchronicity, power distribution and long-distance signaling are all fully functioning in three dimensions for the first time.

Wednesday, July 21, 2010

Jaya S

META-SEARCH ENGINE


A meta-search engine is a search tool that sends user requests to several other search engines and/or databases and aggregates the results into a single list or displays them according to their source. Metasearch engines enable users to enter search criteria once and access several search engines simultaneously. Metasearch engines operate on the premise that the Web is too large for any one search engine to index it all and that more comprehensive search results can be obtained by combining the results from several search engines. This also may save the user from having to use multiple search engines separately.


A meta search engine searches multiple search engines from a single search page.



Meta search engines work in various ways. With some, a single, simultaneous search retrieves results from multiple sources, usually with the duplicates removed. Others offer a separate search of multiple content sources, allowing you to select the source(s) you want for each search.


When a single simultaneous search is offered, only a limited maximum number of pages from each source is returned. The cut-off may be determined by the number of pages retrieved, or by the amount of time the meta engine spends at the other sites. Results retrieved by these engines can be highly relevant, since they are usually grabbing the first items from the relevancy-ranked list of results returned by the individual search engines. Keep in mind that complex searches, such as field searches, are usually not available.


Operation:

Metasearch engines create what is known as a virtual database. They do not compile a physical database or catalogue of the web. Instead, they take a user's request, pass it to several other heterogeneous databases and then compile the results in a homogeneous manner based on a specific algorithm.
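The pass-to-several-sources-and-compile step can be sketched as follows. The two fetcher functions are hypothetical stand-ins for real search-engine back ends; a real metasearch engine would issue network requests and parse the responses instead.

```python
# Minimal metasearch sketch: query several sources, merge results,
# and remove duplicate links. The fetchers below are hypothetical.

def fetch_engine_a(query):
    return ["http://example.com/a", "http://example.com/shared"]

def fetch_engine_b(query):
    return ["http://example.com/shared", "http://example.com/b"]

def metasearch(query, engines):
    """Collect results from every engine, dropping duplicates,
    keeping first-seen (roughly relevance-ranked) order."""
    merged, seen = [], set()
    for engine in engines:
        for url in engine(query):
            if url not in seen:   # duplicate removal across sources
                seen.add(url)
                merged.append(url)
    return merged

results = metasearch("nanocomputers", [fetch_engine_a, fetch_engine_b])
print(results)
```

A production engine would also re-rank the merged list with its own algorithm rather than simply keeping first-seen order.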

Definition of Meta Search Engine:

A meta search engine (also known as a multi-threaded engine) is a search tool that sends your query simultaneously to several search engines (SEs), Web directories (WDs) and sometimes to the so-called Invisible (Deep) Web, a collection of online information not indexed by traditional search engines. After collecting the results, the meta search engine (MSE) will remove the duplicate links and, according to its algorithm, combine/rank the results into a single merged list.

An important note:

Unlike the individual search engines and directories, the meta search engines:
1. Do not have their own databases and
2. Do not accept URL submissions.

Pros and Cons of Meta Search Engines:

Pros: MSEs save searchers a considerable amount of time by sparing them the trouble of running a query in each search engine. The results - most of the time - are extremely relevant.

Cons: Because some SEs or WDs do not support advanced searching techniques such as quotation marks to enclose phrases or Boolean operators, no (or irrelevant) results from those SEs will appear in the MSEs results list when those techniques are used.

When to use a meta search engine?

* When you want to retrieve a relatively small number of relevant results

* When your topic is obscure

* When you are not having luck finding what you want

* When you want the convenience of searching a variety of different content sources from one search page

Examples of meta search engines:

* Browsys

* Dogpile