
23 October 2014

How does your computer know how much ink is left in the cartridge?

If you bought your ink by the gallon instead of tiny amounts, you might perish from sticker shock.
©Piotr Adamowicz/iStock/Thinkstock

Inkjet printer ink is crazy expensive. Depending on the make and model of your printer, you could easily drop $100 or more for a new round of cartridges – all so that you can continue using a printer that may have only cost you $89. So when you want to maximize the number of printouts you make with that pricey ink, you may find yourself wondering exactly how the printer knows each cartridge is about to run dry.
Before we delve into specifics, it's worth knowing that manufacturers purposely program their printers to stop using cartridges that are getting low on ink. That's because if a cartridge were to run totally dry, the plastic cartridge could become too hot and eventually damage or destroy your printer's printhead. In other words, you'd be out a printer instead of just ink.
That said, considering the price of ink, you have a vested interest in squeezing every last drop of the stuff out of each cartridge. Ink may cost anywhere from $13 to $75 for a single ounce. That's -- cough -- nearly $10,000 per gallon [source: Consumer Reports].
Ink is exorbitantly priced in part because printer manufacturers are giving away their sophisticated printers at a really low price in the short term, knowing that they'll make their real profits on ink in the long term.
All of which leads us to this: If ink is such a fabulous cash cow for printer developers, they'd clearly have a reason to fudge on low-ink reminders. After all, if you unknowingly replace cartridges when they still have a usable level of ink inside them, the companies that sell the ink will wind up with significantly higher revenues.
But these companies aren't necessarily out to get you. On the next page you'll read more about low-ink reminders and how you can monitor whether you're getting your money's worth.

If your cartridges are clear or translucent, you can get a visual on your ink situation.
©Sergey Yakovlev/Hemera/Thinkstock

Low-Ink Tech

So how exactly does your printer know that a cartridge is getting low on ink? Different manufacturers use different technologies for this process.
Epson's cartridges are equipped with an integrated circuit chip. This chip tells the printer whether the correct cartridge is installed and also helps the printer keep a record of how much ink each specific cartridge has spewed. Once a cartridge approaches the low-ink threshold, the chip sends an alert to your computer and you see a message on your screen.
Canon takes a different approach. Each printer uses an optical sensor that shines a light through a prism at the bottom of the ink well. Once ink levels fall to a predetermined level, a beam of light bounces toward a low-ink sensor, which again triggers an on-screen message that tells you to replace the cartridge.
Some other printer makers build the printhead directly onto the cartridge, so there's no risk of permanently damaging the printer once ink runs low. These use a chip that's similar to the Epson models. But as part of the system, some of these printers obstinately refuse to print more pages even if ink remains inside, meaning you've no choice but to toss perfectly good ink.
Of course, the big question is just how accurate these systems really are. Journalists and industry insiders offer varying accounts on ink yield, but the consensus seems to be that manufacturers err heavily on the side of cushioning low-ink alerts. That is, they'd much rather have you toss out a cartridge with ink left than print for weeks or months longer before spending more cash on new ones. One study indicated that nearly 60 percent of ink goes unused and is thrown away [source: Haworth].
If ink costs concern you, your best bet is to do a bit of research before you buy a printer. In general, the cheaper the printer, the more expensive the ink. Spend a bit more on the printer itself and your ink costs will likely decrease [source: Wood].
Also consider leaving your printer's power on. Each time you cycle the power on an inkjet, it goes through a maintenance routine that can use a huge percentage of each cartridge's ink [source: Consumer Reports].
Print only when you need to and leave the printer on, and you'll get the most mileage out of each cartridge. Hopefully, you'll save a bit of cash, as well.
 


22 April 2008

How Caching Works


If you have been shopping for a computer, then you have heard the word "cache." Modern computers have both L1 and L2 caches, and many now also have L3 cache. You may also have gotten advice on the topic from well-meaning friends, perhaps something like "Don't buy that Celeron chip, it doesn't have any cache in it!"

It turns out that caching is an important computer-science process that appears on every computer in a variety of forms. There are memory caches, hardware and software disk caches, page caches and more. Virtual memory is even a form of caching. In this article, we will explore caching so you can understand why it is so important.


A Simple Example: Before Cache

Caching is a technology based on the memory subsystem of your computer. The main purpose of a cache is to accelerate your computer while keeping the price of the computer low. Caching allows you to do your computer tasks more rapidly.

To understand the basic idea behind a cache system, let's start with a super-simple example that uses a librarian to demonstrate caching concepts. Let's imagine a librarian behind his desk. He is there to give you the books you ask for. For the sake of simplicity, let's say you can't get the books yourself -- you have to ask the librarian for any book you want to read, and he fetches it for you from a set of stacks in a storeroom (the Library of Congress in Washington, D.C., is set up this way). First, let's start with a librarian without cache.

The first customer arrives. He asks for the book Moby Dick. The librarian goes into the storeroom, gets the book, returns to the counter and gives the book to the customer. Later, the client comes back to return the book. The librarian takes the book and returns it to the storeroom. He then returns to his counter waiting for another customer. Let's say the next customer asks for Moby Dick (you saw it coming...). The librarian then has to return to the storeroom to get the book he recently handled and give it to the client. Under this model, the librarian has to make a complete round trip to fetch every book -- even very popular ones that are requested frequently. Is there a way to improve the performance of the librarian?

Yes, there's a way -- we can put a cache on the librarian. In the next section, we'll look at this same example but this time, the librarian will use a caching system.


A Simple Example: After Cache

Let's give the librarian a backpack into which he will be able to store 10 books (in computer terms, the librarian now has a 10-book cache). In this backpack, he will put the books the clients return to him, up to a maximum of 10. Let's use the prior example, but now with our new-and-improved caching librarian.

The day starts. The backpack of the librarian is empty. Our first client arrives and asks for Moby Dick. No magic here -- the librarian has to go to the storeroom to get the book. He gives it to the client. Later, the client returns and gives the book back to the librarian. Instead of returning to the storeroom to return the book, the librarian puts the book in his backpack and stands there (he checks first to see if the bag is full -- more on that later). Another client arrives and asks for Moby Dick. Before going to the storeroom, the librarian checks to see if this title is in his backpack. He finds it! All he has to do is take the book from the backpack and give it to the client. There's no journey into the storeroom, so the client is served more efficiently.

What if the client asked for a title not in the cache (the backpack)? In this case, the librarian is less efficient with a cache than without one, because the librarian takes the time to look for the book in his backpack first. One of the challenges of cache design is to minimize the impact of cache searches, and modern hardware has reduced this time delay to practically zero. Even in our simple librarian example, the latency time (the waiting time) of searching the cache is so small compared to the time to walk back to the storeroom that it is irrelevant. The cache is small (10 books), and the time it takes to notice a miss is only a tiny fraction of the time that a journey to the storeroom takes.

From this example you can see several important facts about caching:

  • Cache technology is the use of a faster but smaller memory type to accelerate a slower but larger memory type.

  • When using a cache, you must check the cache to see if an item is in there. If it is there, it's called a cache hit. If not, it is called a cache miss and the computer must wait for a round trip from the larger, slower memory area.

  • A cache has some maximum size that is much smaller than the larger storage area.

  • It is possible to have multiple layers of cache. With our librarian example, the smaller but faster memory type is the backpack, and the storeroom represents the larger and slower memory type. This is a one-level cache. There might be another layer of cache consisting of a shelf that can hold 100 books behind the counter. The librarian can check the backpack, then the shelf and then the storeroom. This would be a two-level cache.
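The librarian's backpack maps directly onto code. Here is a minimal sketch (class and constant names are my own, and it simplifies the story slightly: the book goes into the backpack as soon as it is fetched, rather than when the client returns it). The oldest book is evicted when the backpack is full:

```python
from collections import OrderedDict

STOREROOM_TRIP = 100   # pretend cost of a round trip to the storeroom
BACKPACK_CHECK = 1     # pretend cost of checking the backpack first

class Backpack:
    """A tiny cache: at most `capacity` books, oldest evicted first."""
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.books = OrderedDict()
        self.hits = self.misses = 0

    def fetch(self, title):
        cost = BACKPACK_CHECK            # we always pay to look in the backpack
        if title in self.books:
            self.hits += 1               # cache hit: no storeroom trip needed
        else:
            self.misses += 1             # cache miss: walk to the storeroom
            cost += STOREROOM_TRIP
            if len(self.books) >= self.capacity:
                self.books.popitem(last=False)   # evict the oldest book
            self.books[title] = True
        return cost

pack = Backpack()
total = sum(pack.fetch(t) for t in ["Moby Dick", "Moby Dick", "Walden", "Moby Dick"])
print(pack.hits, pack.misses, total)   # 2 2 204
```

With two hits out of four requests, the total cost (204) is already far below the 400 units that four storeroom trips would have required.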

Computer Caches

A computer is a machine in which we measure time in very small increments. When the microprocessor accesses the main memory (RAM), it does it in about 60 nanoseconds (60 billionths of a second). That's pretty fast, but it is much slower than the typical microprocessor. Microprocessors can have cycle times as short as 2 nanoseconds, so to a microprocessor 60 nanoseconds seems like an eternity.

What if we build a special memory bank in the motherboard, small but very fast (around 30 nanoseconds)? That's already two times faster than the main memory access. That's called a level 2 cache or an L2 cache. What if we build an even smaller but faster memory system directly into the microprocessor's chip? That way, this memory will be accessed at the speed of the microprocessor and not the speed of the memory bus. That's an L1 cache, which on a 233-megahertz (MHz) Pentium is 3.5 times faster than the L2 cache, which is two times faster than the access to main memory.
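The relative speeds quoted above multiply out like this (a quick back-of-envelope check using the article's own figures):

```python
main_memory_ns = 60
l2_ns = main_memory_ns / 2    # L2 is "two times faster" than main memory
l1_ns = l2_ns / 3.5           # L1 is "3.5 times faster" than the L2 cache
print(l2_ns, round(l1_ns, 1))  # 30.0 8.6
```

So the L1 cache on that 233-MHz Pentium answers in roughly 8 to 9 nanoseconds, close to the 10-nanosecond figure listed later in this article.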

Some microprocessors have two levels of cache built right into the chip. In this case, the motherboard cache -- the cache that exists between the microprocessor and main system memory -- becomes level 3, or L3 cache.

There are a lot of subsystems in a computer; you can put cache between many of them to improve performance. Here's an example. We have the microprocessor (the fastest thing in the computer). Then there's the L1 cache that caches the L2 cache that caches the main memory which can be used (and is often used) as a cache for even slower peripherals like hard disks and CD-ROMs. The hard disks are also used to cache an even slower medium -- your Internet connection.


Caching Subsystems

Your Internet connection is the slowest link in your computer. So your browser (Internet Explorer, Netscape, Opera, etc.) uses the hard disk to store HTML pages, putting them into a special folder on your disk. The first time you ask for an HTML page, your browser renders it and a copy of it is also stored on your disk. The next time you request access to this page, your browser checks if the date of the file on the Internet is newer than the one cached. If the date is the same, your browser uses the one on your hard disk instead of downloading it from Internet. In this case, the smaller but faster memory system is your hard disk and the larger and slower one is the Internet.
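The browser's freshness check can be sketched like this. It is an illustration only: the function name is my own, and the `remote_mtime` argument stands in for the check a real browser performs with HTTP headers (Last-Modified / If-Modified-Since):

```python
import os

def fetch_if_newer(url, cache_path, remote_mtime):
    """Serve the copy cached on disk if it is at least as new as the remote one.

    `remote_mtime` is a stand-in for the modification date a real browser
    would learn from the server via HTTP headers.
    """
    if os.path.exists(cache_path):
        if os.path.getmtime(cache_path) >= remote_mtime:
            with open(cache_path) as f:
                return f.read(), "cache hit"   # no download needed
    # cache miss: "download" the page, then keep a copy on disk for next time
    page = "<html>downloaded copy of %s</html>" % url
    with open(cache_path, "w") as f:
        f.write(page)
    return page, "cache miss"
```

The first request for a page is a miss (slow: it goes out over the Internet); every later request is served from the much faster hard disk until the remote copy changes.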

Cache can also be built directly on peripherals. Modern hard disks come with fast memory, around 512 kilobytes, hardwired to the hard disk. The computer doesn't directly use this memory -- the hard-disk controller does. For the computer, these memory chips are the disk itself. When the computer asks for data from the hard disk, the hard-disk controller checks this memory before moving the mechanical parts of the hard disk (which are very slow compared to memory). If it finds the data that the computer asked for in the cache, it will return the data stored in the cache without actually accessing data on the disk itself, saving a lot of time.

Here's an experiment you can try. Your computer caches your floppy drive with main memory, and you can actually see it happening. Access a large file from your floppy -- for example, open a 300-kilobyte text file in a text editor. The first time, you will see the light on your floppy turning on, and you will wait. The floppy disk is extremely slow, so it will take 20 seconds to load the file. Now, close the editor and open the same file again. The second time (don't wait 30 minutes or do a lot of disk access between the two tries) you won't see the light turning on, and you won't wait. The operating system checked its memory cache for the floppy disk and found what it was looking for. So instead of waiting 20 seconds, the data was found in a memory subsystem much faster than when you first tried it (one access to the floppy disk takes 120 milliseconds, while one access to the main memory takes around 60 nanoseconds -- that's a lot faster). You could have run the same test on your hard disk, but it's more evident on the floppy drive because it's so slow.

To give you the big picture of it all, here's a list of a normal caching system:

  • L1 cache - Memory accesses at full microprocessor speed (10 nanoseconds, 4 kilobytes to 16 kilobytes in size)
  • L2 cache - Memory access of type SRAM (around 20 to 30 nanoseconds, 128 kilobytes to 512 kilobytes in size)
  • Main memory - Memory access of type RAM (around 60 nanoseconds, 32 megabytes to 128 megabytes in size)
  • Hard disk - Mechanical, slow (around 12 milliseconds, 1 gigabyte to 10 gigabytes in size)
  • Internet - Incredibly slow (between 1 second and 3 days, unlimited size)
As you can see, the L1 cache caches the L2 cache, which caches the main memory, which can be used to cache the disk subsystems, and so on.
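To see just how steep this hierarchy is, it helps to put every level in the same units. This sketch uses the rough figures from the list above (taking 25 ns as the midpoint for L2, 1 second as the best case for the Internet):

```python
# Latencies from the list above, all converted to nanoseconds.
latencies_ns = {
    "L1 cache":    10,
    "L2 cache":    25,             # midpoint of 20-30 ns
    "Main memory": 60,
    "Hard disk":   12_000_000,     # 12 milliseconds
    "Internet":    1_000_000_000,  # 1 second, and that's the best case
}
for level, ns in latencies_ns.items():
    print(f"{level:11s} {ns / latencies_ns['L1 cache']:>13,.1f}x the L1 access time")
```

A hard-disk access costs over a million L1 accesses, which is why each level is worth caching in the next-faster one.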

Cache Technology

One common question asked at this point is, "Why not make all of the computer's memory run at the same speed as the L1 cache, so no caching would be required?" That would work, but it would be incredibly expensive. The idea behind caching is to use a small amount of expensive memory to speed up a large amount of slower, less-expensive memory.

In designing a computer, the goal is to allow the microprocessor to run at its full speed as inexpensively as possible. A 500-MHz chip goes through 500 million cycles in one second (one cycle every two nanoseconds). Without L1 and L2 caches, an access to the main memory takes 60 nanoseconds, or about 30 wasted cycles accessing memory.
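The wasted-cycle arithmetic is easy to verify:

```python
clock_hz = 500_000_000            # a 500-MHz chip
cycle_ns = 1e9 / clock_hz         # 2 ns per cycle
memory_access_ns = 60             # one trip to main memory
wasted_cycles = memory_access_ns / cycle_ns
print(wasted_cycles)              # 30.0 cycles spent waiting on memory
```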

When you think about it, it is kind of incredible that such relatively tiny amounts of memory can maximize the use of much larger amounts of memory. Think about a 256-kilobyte L2 cache that caches 64 megabytes of RAM. In this case, 256,000 bytes efficiently caches 64,000,000 bytes. Why does that work?

In computer science, we have a theoretical concept called locality of reference. It means that in a fairly large program, only small portions are ever used at any one time. As strange as it may seem, locality of reference works for the huge majority of programs. Even if the executable is 10 megabytes in size, only a handful of bytes from that program are in use at any one time, and their rate of repetition is very high. On the next page, you'll learn more about locality of reference.


Locality of Reference

Let's take a look at the following pseudo-code to see why locality of reference works (see How C Programming Works to really get into it):
Output to screen "Enter a number between 1 and 100"
Read input from user
Put value from user in variable X
Put value 100 in variable Y
Put value 1 in variable Z
Loop Y number of times
    Divide Z by X
    If the remainder of the division = 0 then output "Z is a multiple of X"
    Add 1 to Z
Return to loop
End
This small program asks the user to enter a number between 1 and 100. It reads the value entered by the user. Then, the program divides every number between 1 and 100 by the number entered by the user. It checks if the remainder is 0 (modulo division). If so, the program outputs "Z is a multiple of X" (for example, 12 is a multiple of 6), for every number between 1 and 100. Then the program ends.
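For concreteness, here is the same logic as a short, runnable Python sketch (the interactive input step is replaced by a function argument so the behavior is easy to test; the function name is my own):

```python
def multiples_up_to_100(x):
    """Report every multiple of x between 1 and 100, as the pseudo-code does."""
    results = []
    for z in range(1, 101):      # this loop body executes 100 times
        if z % x == 0:           # the remainder of the division is 0
            results.append(f"{z} is a multiple of {x}")
    return results

print("\n".join(multiples_up_to_100(6)))   # 6, 12, 18, ... 96
```

Only the three lines inside the loop are executed repeatedly, and it's exactly those lines that a cache keeps close at hand.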

Even if you don't know much about computer programming, it is easy to understand that of the 11 lines of this program, the loop part (lines 7 to 9) is executed 100 times. All of the other lines are executed only once. Lines 7 to 9 will run significantly faster because of caching.

This program is very small and can easily fit entirely in the smallest of L1 caches, but let's say this program is huge. The result remains the same. When you program, a lot of action takes place inside loops. A word processor spends 95 percent of the time waiting for your input and displaying it on the screen. This part of the word-processor program is in the cache.

This 95%-to-5% ratio (approximately) is what we call the locality of reference, and it's why a cache works so efficiently. This is also why such a small cache can efficiently cache such a large memory system. You can see why it's not worth it to construct a computer with the fastest memory everywhere. We can deliver 95 percent of this effectiveness for a fraction of the cost.


05 January 2008

Do You Know How PCI Express Works?

Introduction to How PCI Express Works

Peripheral Component Interconnect (PCI) slots are such an integral part of a computer's architecture that most people take them for granted. For years, PCI has been a versatile, functional way to connect sound, video and network cards to a motherboard.

But PCI has some shortcomings. As processors, video cards, sound cards and networks have gotten faster and more powerful, PCI has stayed the same. It has a fixed width of 32 bits and can handle only 5 devices at a time. The newer, 64-bit PCI-X bus provides more bandwidth, but its greater width compounds some of PCI's other issues.


A new protocol called PCI Express (PCIe) eliminates a lot of these shortcomings, provides more bandwidth and is compatible with existing operating systems. In this article, we'll examine what makes PCIe different from PCI. We'll also look at how PCI Express makes a computer faster, can potentially add graphics performance, and can replace the AGP slot.

PCI Express card
Photo courtesy Consumer Guide Products

Thank You
Thanks to Joshua Senecal for his assistance with this article.

High-Speed Serial Connection
In the early days of computing, a vast amount of data moved over serial connections. Computers separated data into packets and then moved the packets from one place to another one at a time. Serial connections were reliable but slow, so manufacturers began using parallel connections to send multiple pieces of data simultaneously.

It turns out that parallel connections have their own problems as speeds get higher and higher -- for example, wires can interfere with each other electromagnetically -- so now the pendulum is swinging back toward highly-optimized serial connections. Improvements to hardware and to the process of dividing, labeling and reassembling packets have led to much faster serial connections, such as USB 2.0 and FireWire.

Sizing Up
Smaller PCIe cards will fit into larger PCIe slots. The computer simply ignores the extra connections. For example, a x4 card can plug into a x16 slot. A x16 card, however, would be too big for a x4 slot.
PCI Express is a serial connection that operates more like a network than a bus. Instead of one bus that handles data from multiple sources, PCIe has a switch that controls several point-to-point serial connections. (See How LAN Switches Work for details.) These connections fan out from the switch, leading directly to the devices where the data needs to go. Every device has its own dedicated connection, so devices no longer share bandwidth like they do on a normal bus. We'll look at how this happens in the next section.

PCI Express Lanes

When the computer starts up, PCIe determines which devices are plugged into the motherboard. It then identifies the links between the devices, creating a map of where traffic will go and negotiating the width of each link. This identification of devices and connections is the same protocol PCI uses, so PCIe does not require any changes to software or operating systems.

PCI Express links and lanes

Each lane of a PCI Express connection contains two pairs of wires -- one to send and one to receive. Packets of data move across the lane at a rate of one bit per cycle. A x1 connection, the smallest PCIe connection, has one lane made up of four wires. It carries one bit per cycle in each direction. A x2 link contains eight wires and transmits two bits at once, a x4 link transmits four bits, and so on. Other configurations are x12, x16 and x32.
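The wire counts scale directly with the lane count; a quick sketch of the rule described above (the helper name is my own):

```python
def pcie_wires(lanes):
    """Each lane needs four wires: a pair to send plus a pair to receive."""
    return lanes * 4

for lanes in (1, 2, 4, 8, 12, 16, 32):
    print(f"x{lanes:<2} link: {lanes} bit(s) per cycle each way, {pcie_wires(lanes)} wires")
```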

PCI Express slots
Scalable PCI Express slots.

PCI Express is available for desktop and laptop PCs. Its use may lead to lower cost of motherboard production, since its connections contain fewer pins than PCI connections do. It also has the potential to support many devices, including Ethernet cards, USB 2 and video cards.

Two by Two
The "x" in an "x16" connection stands for "by." PCIe connections are scalable by one, by two, by four, and so on.

But how can one serial connection be faster than the 32 wires of PCI or the 64 wires of PCI-X? In the next section, we'll look at how PCIe is able to provide a vast amount of bandwidth in a serial format.

PCI Express Connection Speeds

The 32-bit PCI bus has a maximum speed of 33 MHz, which allows a maximum of 133 MB of data to pass through the bus per second. The 64-bit PCI-X bus has twice the bus width of PCI. Different PCI-X specifications allow different rates of data transfer, anywhere from 512 MB to 1 GB of data per second.

PCI Express vs. PCI comparison
Devices using PCI share a common bus, but each device using PCI Express has its own dedicated connection to the switch.

A single PCI Express lane, however, can handle 200 MB of traffic in each direction per second. A x16 PCIe connector can therefore move an amazing 3.2 GB of data per second in each direction -- 6.4 GB per second in total. At these speeds, a x1 connection can easily handle a gigabit Ethernet connection as well as audio and storage applications. A x16 connection can easily handle powerful graphics adapters.
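These figures follow from simple multiplication. Here is a sketch using the article's 200 MB/s-per-lane, per-direction figure (the function name is my own):

```python
PER_LANE_MB_PER_S = 200      # the article's per-lane, per-direction figure

def pcie_mb_per_s(lanes, both_directions=False):
    """Peak transfer rate for a link of the given width, in MB/s."""
    rate = lanes * PER_LANE_MB_PER_S
    return rate * 2 if both_directions else rate

print(pcie_mb_per_s(1))                          # 200 MB/s each way
print(pcie_mb_per_s(16))                         # 3,200 MB/s = 3.2 GB/s each way
print(pcie_mb_per_s(16, both_directions=True))   # 6,400 MB/s = 6.4 GB/s total
```

Even the smallest x1 link comfortably exceeds what a gigabit Ethernet card (about 125 MB/s) can generate.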

How is this possible? A few simple advances have contributed to this massive jump in serial connection speed:

  • Prioritization of data, which allows the system to move the most important data first and helps prevent bottlenecks

  • Time-dependent (real-time) data transfers

  • Improvements in the physical materials used to make the connections

  • Better handshaking and error detection

  • Better methods for breaking data into packets and putting the packets together again. Also, since each device has its own dedicated, point-to-point connection to the switch, signals from multiple sources no longer have to work their way through the same bus.

Slowing the Bus
Interference and signal degradation are common in parallel connections. Poor materials and crossover signal from nearby wires translate into noise, which slows the connection down. The additional bandwidth of the PCI-X bus means it can carry more data that can generate even more noise. The PCI protocol also does not prioritize data, so more important data can get caught in the bottleneck. Using the Accelerated Graphics Port (AGP) slot for video cards removes a substantial amount of traffic, but not enough to compensate for faster processors and I/O devices.

PCI Express and Advanced Graphics

We've established that PCIe can eliminate the need for an AGP connection. A x16 PCIe slot can accommodate far more data per second than current AGP 8x connections allow. In addition, a x16 PCIe slot can supply 75 watts of power to the video card, as opposed to the 25-watt/42-watt AGP 8x connection. But PCIe has even more impressive potential in store for the future of graphics technology.

PCI express video card
Photo courtesy Consumer Guide Products
PCI Express video card
AGP 8x video card
Photo courtesy Consumer Guide Products
AGP 8x video card

With the right hardware, a motherboard with two x16 PCIe connections can support two graphics adapters at the same time. Several manufacturers are developing and releasing systems to take advantage of this feature:

  • NVIDIA Scalable Link Interface (SLI): With an SLI-certified motherboard, two SLI graphics cards and an SLI connector, a user can put two video cards into the same system. The cards work together by splitting the screen in half. Each card controls half of the screen, and the connector makes sure that everything stays synchronized.

    NVIDIA SLI link card
    Photo courtesy NVIDIA
    NVIDIA SLI link card

  • ATI CrossFire: Two ATI Radeon® video cards, one with a "compositing engine" chip, plug into a compatible motherboard. ATI's technology focuses on image quality and does not require identical video cards, although the highest-performance configurations pair identical cards. CrossFire divides up the work of rendering in one of three ways:

    • splitting the screen in half and assigning one half to each card (called "scissoring")
    • dividing up the screen into tiles (like a checkerboard) and having one card render the "white" tiles and the other render the "black" tiles
    • having each card render alternate frames

  • Alienware Video Array: Two off-the-shelf video cards combine with a Video Merger Hub and proprietary software. This system will use specialized cooling and power systems to handle all the extra heat and energy from the video cards. Alienware's technology may eventually support as many as four video cards.

Two video cards running parallel
Photo courtesy NVIDIA
Two video cards running parallel

Since PCI, PCI-X and PCI Express are all compatible, all three can coexist indefinitely. So far, video cards have made the fastest transition to the PCIe format. Network and sound adapters, as well as other peripherals, have been slower in development. But since PCIe is compatible with current operating systems and can provide faster speeds, it is likely that it will eventually replace PCI as a PC standard. Gradually, PCI-based cards will become obsolete.

For more information about PCI Express and related topics, check out the links on the next page.



30 November 2007

How Bluetooth Works


Introduction to How Bluetooth Works


Photo courtesy DealTime
Jabra FreeSpeak BT250 Bluetooth headset.
There are lots of different ways that electronic devices can connect to one another. For example:
  • Component cables
  • Electrical wires
  • Ethernet cables
  • WiFi
  • Infrared signals

When you use computers, entertainment systems or telephones, the various pieces and parts of the systems make up a community of electronic devices. These devices communicate with each other using a variety of wires, cables, radio signals and infrared light beams, and an even greater variety of connectors, plugs and protocols.

The art of connecting things is becoming more and more complex every day. In this article, we will look at a method of connecting devices, called Bluetooth, that can streamline the process. A Bluetooth connection is wireless and automatic, and it has a number of interesting features that can simplify our daily lives.

How Bluetooth Creates a Connection

Bluetooth takes small-area networking to the next level by removing the need for user intervention and keeping transmission power extremely low to save battery power. Picture this: You're on your Bluetooth-enabled cell phone, standing outside the door to your house. You tell the person on the other end of the line to call you back in five minutes so you can get in the house and put your stuff away. As soon as you walk in the house, the map you received on your cell phone from your car's Bluetooth-enabled GPS system is automatically sent to your Bluetooth-enabled computer, because your cell phone picked up a Bluetooth signal from your PC and automatically sent the data you designated for transfer. Five minutes later, when your friend calls you back, your Bluetooth-enabled home phone rings instead of your cell phone. The person called the same number, but your home phone picked up the Bluetooth signal from your cell phone and automatically re-routed the call because it realized you were home. And each transmission signal to and from your cell phone consumes just 1 milliwatt of power, so your cell phone charge is virtually unaffected by all of this activity.

Bluetooth is essentially a networking standard that works at two levels:

  • It provides agreement at the physical level -- Bluetooth is a radio-frequency standard.

  • It provides agreement at the protocol level, where products have to agree on when bits are sent, how many will be sent at a time, and how the parties in a conversation can be sure that the message received is the same as the message sent.


Photo courtesy Bluetooth SIG
Bluetooth wireless PC card

The big draws of Bluetooth are that it is wireless, inexpensive and automatic. There are other ways to get around using wires, including infrared communication. Infrared (IR) refers to light waves of a lower frequency than human eyes can receive and interpret. Infrared is used in most television remote control systems. Infrared communications are fairly reliable and don't cost very much to build into a device, but there are a couple of drawbacks. First, infrared is a "line of sight" technology. For example, you have to point the remote control at the television or DVD player to make things happen. The second drawback is that infrared is almost always a "one to one" technology. You can send data between your desktop computer and your laptop computer, but not your laptop computer and your PDA at the same time. (See How Remote Controls Work to learn more about infrared communication.)

These two qualities of infrared are actually advantageous in some regards. Because infrared transmitters and receivers have to be lined up with each other, interference between devices is uncommon. The one-to-one nature of infrared communications is useful in that you can make sure a message goes only to the intended recipient, even in a room full of infrared receivers.

Bluetooth is intended to get around the problems that come with infrared systems. The older Bluetooth 1.0 standard has a maximum transfer speed of 1 megabit per second (Mbps), while Bluetooth 2.0 can manage up to 3 Mbps. Bluetooth 2.0 is backward-compatible with 1.0 devices.

Let's find out how Bluetooth networking works.

How Bluetooth Operates

Bluetooth networking transmits data via low-power radio waves. It communicates on a frequency of 2.45 gigahertz (more precisely, between 2.402 GHz and 2.480 GHz). This frequency band has been set aside by international agreement for the use of industrial, scientific and medical devices (ISM).

A number of devices that you may already use take advantage of this same radio-frequency band. Baby monitors, garage-door openers and the newest generation of cordless phones all make use of frequencies in the ISM band. Making sure that Bluetooth and these other devices don't interfere with one another has been a crucial part of the design process.

One of the ways Bluetooth devices avoid interfering with other systems is by sending out very weak signals of about 1 milliwatt. By comparison, the most powerful cell phones can transmit a signal of 3 watts. The low power limits the range of a Bluetooth device to about 10 meters (32 feet), cutting the chances of interference between your computer system and your portable telephone or television. Even with the low power, Bluetooth doesn't require line of sight between communicating devices. The walls in your house won't stop a Bluetooth signal, making the standard useful for controlling several devices in different rooms.

Bluetooth can connect up to eight devices simultaneously. With all of those devices in the same 10-meter (32-foot) radius, you might think they'd interfere with one another, but it's unlikely. Bluetooth uses a technique called spread-spectrum frequency hopping that makes it rare for more than one device to be transmitting on the same frequency at the same time. In this technique, a device uses 79 individual, pseudorandomly chosen frequencies within a designated range, changing from one to another on a regular basis. Bluetooth transmitters change frequencies 1,600 times every second, meaning that more devices can make full use of a limited slice of the radio spectrum. Since every Bluetooth transmitter hops this way automatically, it's rare for two transmitters to land on the same frequency at the same moment. The same technique minimizes the risk that portable phones or baby monitors will disrupt Bluetooth devices, since any interference on a particular frequency lasts only a tiny fraction of a second.
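To see why two independent hoppers rarely collide, here is a minimal simulation (my own sketch, not the real Bluetooth hop-selection algorithm): two transmitters each pick one of 79 channels per time slot, 1,600 slots per second, and we count how often they land on the same channel.

```python
import random

CHANNELS = 79           # Bluetooth hops among 79 1-MHz channels
HOPS_PER_SECOND = 1600  # channel changes per second

def hop_sequence(seed, length):
    """Generate a pseudorandom hop sequence, one channel per time slot."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(length)]

# Two independent transmitters simulated over one second of hopping
a = hop_sequence(seed=1, length=HOPS_PER_SECOND)
b = hop_sequence(seed=2, length=HOPS_PER_SECOND)

collisions = sum(1 for x, y in zip(a, b) if x == y)
print(f"{collisions} of {HOPS_PER_SECOND} slots collided "
      f"({collisions / HOPS_PER_SECOND:.1%})")
```

With 79 channels, roughly 1 in 79 slots (about 1.3 percent) collide, and each collision lasts only one 625-microsecond slot before both devices hop away.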

When Bluetooth-capable devices come within range of one another, an electronic conversation takes place to determine whether they have data to share or whether one needs to control the other. The user doesn't have to press a button or give a command -- the electronic conversation happens automatically. Once the conversation has occurred, the devices -- whether they're part of a computer system or a stereo -- form a network. Bluetooth systems create a personal-area network (PAN), or piconet, that may fill a room or may encompass no more distance than that between the cell phone on a belt-clip and the headset on your head. Once a piconet is established, the members randomly hop frequencies in unison so they stay in touch with one another and avoid other piconets that may be operating in the same room. Let's check out an example of a Bluetooth-connected system.
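The key to a piconet is that all members compute the same hop sequence, so they move through the spectrum in lockstep. In real Bluetooth the sequence is derived from the master's address and clock; in this illustrative sketch a shared seed stands in for that:

```python
import random

CHANNELS = 79

class Piconet:
    """Toy piconet: members derive the same hop sequence from a shared seed.

    In real Bluetooth the sequence comes from the master device's address
    and clock; a shared integer seed stands in for that here.
    """
    def __init__(self, seed):
        self.seed = seed
        self.members = []

    def join(self, name):
        self.members.append(name)

    def channel_at(self, slot):
        # Every member computes the same channel for a given time slot,
        # so the whole piconet hops in unison.
        return random.Random(self.seed * 100003 + slot).randrange(CHANNELS)

phone = Piconet(seed=0xA5)
phone.join("handset")
phone.join("base")

stereo = Piconet(seed=0x3C)
stereo.join("player")
stereo.join("speakers")

slot = 42
print("phone piconet on channel", phone.channel_at(slot))
print("stereo piconet on channel", stereo.channel_at(slot))
```

Because the two piconets use different seeds, their sequences are uncorrelated: they share a room and a frequency band but almost never a channel.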

Bluetooth Piconets

Let's say you have a typical modern living room with typical modern stuff inside. There's an entertainment system with a stereo, a DVD player, a satellite TV receiver and a television; there's also a cordless telephone and a personal computer. Each of these systems uses Bluetooth, and each forms its own piconet to talk between the main unit and its peripherals.

The cordless telephone has one Bluetooth transmitter in the base and another in the handset. The manufacturer has programmed each unit with an address that falls into a range of addresses it has established for a particular type of device. When the base is first turned on, it sends radio signals asking for a response from any units with an address in a particular range. Since the handset has an address in the range, it responds, and a tiny network is formed. Now, even if one of these devices should receive a signal from another system, it will ignore it since it’s not from within the network. The computer and entertainment system go through similar routines, establishing networks among addresses in ranges established by manufacturers. Once the networks are established, the systems begin talking among themselves. Each piconet hops randomly through the available frequencies, so all of the piconets are completely separated from one another.
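The inquiry-and-response routine described above can be sketched in a few lines. The address ranges here are made up for illustration; real Bluetooth addresses are 48-bit values with manufacturer-assigned prefixes.

```python
# Hypothetical address range for "cordless phone" units (illustrative only)
PHONE_RANGE = range(0x1000, 0x2000)

class Unit:
    def __init__(self, address):
        self.address = address
        self.peers = []

    def inquiry(self, nearby, address_range):
        """Broadcast an inquiry; only units with an in-range address respond."""
        for dev in nearby:
            if dev is not self and dev.address in address_range:
                self.peers.append(dev)
                dev.peers.append(self)

    def accept(self, sender):
        # Signals from outside the established network are ignored.
        return sender in self.peers

base = Unit(address=0x1234)
handset = Unit(address=0x1777)
stereo = Unit(address=0x9999)   # a different device class entirely

base.inquiry([handset, stereo], PHONE_RANGE)
print(base.accept(handset))  # True: part of the tiny network
print(base.accept(stereo))   # False: out-of-range address, ignored
```

The stereo hears the base's inquiry but never answers it, and even if its signals reach the base later, they are ignored because the stereo was never admitted to the network.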

Now the living room has three separate networks established, each one made up of devices that know the address of transmitters it should listen to and the address of receivers it should talk to. Since each network is changing its operating frequency 1,600 times a second, it's unlikely that any two networks will be on the same frequency at the same time. If it turns out that they are, the resulting confusion covers only a tiny fraction of a second, and software designed to correct for such errors weeds out the garbled information and gets on with the network's business.

Flexible Transmission
Most of the time, a network or communications method either works in one direction at a time, called half-duplex communication, or in both directions simultaneously, called full-duplex communication. A speakerphone that lets you either listen or talk, but not both, is an example of half-duplex communication, while a regular telephone handset is a full-duplex device. Because Bluetooth is designed to work in a number of different circumstances, it can be either half-duplex or full-duplex.

The cordless telephone is an example of a use that will call for a full-duplex (two-way) link, and Bluetooth can send data at more than 64 kilobits per second (Kbps) in a full-duplex link -- a rate high enough to support several voice conversations. If a particular use calls for a half-duplex link -- connecting to a computer printer, for example -- Bluetooth can transmit up to 721 Kbps in one direction, with 57.6 Kbps in the other. If the use calls for the same speed in both directions, Bluetooth can establish a link with 432.6-Kbps capacity in each direction.
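As a rough sketch, the link capacities quoted above can be laid out in a small table; the mode names below are mine, not terms from the Bluetooth specification:

```python
# Classic Bluetooth 1.x data-link configurations from the text,
# in kilobits per second (illustrative labels, not spec terminology)
LINK_MODES = {
    "asymmetric half-duplex": {"forward": 721.0, "reverse": 57.6},
    "symmetric":              {"forward": 432.6, "reverse": 432.6},
}

for name, rates in LINK_MODES.items():
    total = rates["forward"] + rates["reverse"]
    print(f"{name}: {rates['forward']} / {rates['reverse']} Kbps "
          f"({total:.1f} Kbps combined)")
```

Note that the asymmetric mode trades almost all of the channel to one direction, which suits a print job; the symmetric mode splits capacity evenly for uses like file synchronization.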

Bluetooth Security

In any wireless networking setup, security is a concern. Devices can easily grab radio waves out of the air, so people who send sensitive information over a wireless connection need to take precautions to make sure those signals aren't intercepted. Bluetooth technology is no different -- it's wireless and therefore susceptible to spying and remote access, just like WiFi is susceptible if the network isn't secure. With Bluetooth, though, the automatic nature of the connection, which is a huge benefit in terms of time and effort, is also a benefit to people looking to send you data without your permission.

Bluetooth offers several security modes, and device manufacturers determine which mode to include in a Bluetooth-enabled gadget. In almost all cases, Bluetooth users can establish "trusted devices" that can exchange data without asking permission. When any other device tries to establish a connection to the user's gadget, the user has to decide to allow it. Service-level security and device-level security work together to protect Bluetooth devices from unauthorized data transmission. Security methods include authorization and identification procedures that limit the use of Bluetooth services to the registered user and require that users make a conscious decision to open a file or accept a data transfer. As long as these measures are enabled on the user's phone or other device, unauthorized access is unlikely. A user can also simply switch his Bluetooth mode to "non-discoverable" and avoid connecting with other Bluetooth devices entirely. If a user makes use of the Bluetooth network primarily for synching devices at home, this might be a good way to avoid any chance of a security breach while in public.
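The trust decisions described above boil down to a small state machine. This is a toy model of the behavior, not a real Bluetooth stack; the class and method names are my own:

```python
class BluetoothDevice:
    """Toy model of trusted-device and discoverability decisions."""
    def __init__(self, name):
        self.name = name
        self.trusted = set()      # devices allowed to send without asking
        self.discoverable = True  # visible to other Bluetooth devices

    def trust(self, other):
        self.trusted.add(other.name)

    def receive(self, sender, data):
        if not self.discoverable:
            return "ignored: non-discoverable"
        if sender.name in self.trusted:
            return f"accepted from trusted {sender.name}"
        # Unknown device: the user must explicitly approve the transfer.
        return f"held for user approval ({sender.name} is untrusted)"

phone = BluetoothDevice("phone")
laptop = BluetoothDevice("laptop")
stranger = BluetoothDevice("stranger")

phone.trust(laptop)
print(phone.receive(laptop, b"photo"))   # accepted automatically
print(phone.receive(stranger, b"file"))  # held until the user approves
phone.discoverable = False
print(phone.receive(stranger, b"file"))  # ignored entirely
```

The last case models the "non-discoverable" mode: nothing gets through, which is why it is a sensible default for a device that only ever syncs with trusted hardware at home.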

Still, early cell-phone virus writers have taken advantage of Bluetooth's automated connection process to send out infected files. However, since most cell phones use a secure Bluetooth connection that requires authorization and authentication before accepting data from an unknown device, the infected file typically doesn't get very far. When the virus arrives in the user's cell phone, the user has to agree to open it and then agree to install it. This has, so far, stopped most cell-phone viruses from doing much damage. See How Cell-phone Viruses Work to learn more.

Other problems like "bluejacking," "bluebugging" and "Car Whisperer" have turned up as Bluetooth-specific security issues. Bluejacking involves Bluetooth users sending a business card (just a text message, really) to other Bluetooth users within a 10-meter (32-foot) radius. If the user doesn't realize what the message is, he might allow the contact to be added to his address book, and the contact can send him messages that might be automatically opened because they're coming from a known contact. Bluebugging is more of a problem, because it allows hackers to remotely access a user's phone and use its features, including placing calls and sending text messages, and the user doesn't realize it's happening. The Car Whisperer is a piece of software that allows hackers to send audio to and receive audio from a Bluetooth-enabled car stereo. Like a computer security hole, these vulnerabilities are an inevitable result of technological innovation, and device manufacturers are releasing firmware upgrades that address new problems as they arise.

To learn more about Bluetooth security issues and solutions, see Bluetooth.com: Wireless Security.


The World Blogger

