Our digital detectives uncover the truth behind some of tech’s most baffling questions.
By Alex Wawro, PCWorld Jul 23, 2012 6:30 am
Why does my Windows desktop look weird on my HDTV?
It can be terribly frustrating to run a cable from a good HDTV to a PC graphics board and get skewed proportions, faded colors, and slightly blurry images.
Your HDTV might not perform well as a monitor for several reasons. First, many HDTVs aren’t designed to display fine details (such as text and lines) at close distances, but to display smooth motion and vibrant colors from viewing distances of 5 feet or more. Some HDTVs are configurable for use as PC monitors, however, so check to see whether your HDTV has an ‘Alternate Monitor’ display setting, or something similar.
Second, the native resolution of your HDTV may not match your Windows desktop as configured. For example, if you hook up a 42-inch LG 42LH50 HDTV with a native resolution of 1920 by 1080 (1080p) to a PC whose desktop is set to the taller 16:10 resolution of 1920 by 1200, everything on your desktop will look a bit stretched out and weird. For optimal picture quality, you must configure your PC to match your HDTV’s native resolution and disable any of your HDTV’s features that may interfere with the PC video signal (scaling, overscan, and the like) and thus degrade picture quality.
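The mismatch is simple arithmetic. Here’s a quick sketch using the two resolutions named above:

```python
from math import gcd

def aspect_ratio(width, height):
    """Return the aspect ratio as a simplified width:height tuple."""
    g = gcd(width, height)
    return (width // g, height // g)

print(aspect_ratio(1920, 1080))  # (16, 9) -- the HDTV's native shape
print(aspect_ratio(1920, 1200))  # (8, 5), i.e. 16:10 -- the desktop's shape

# Squeezing a 16:10 desktop onto a 16:9 panel distorts everything
# vertically by a factor of 1200/1080 -- about 11 percent.
print(round(1200 / 1080 - 1, 3))  # 0.111
```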
Third, an older graphics board driver may assume that an HDMI connection must be to a TV, and it may react by underscanning the image from your PC to fit it within the borders of the display, making your Windows desktop look squished on a big 1080p PC monitor. To correct this problem, open the control panel for your graphics board and disable the underscan feature.
Finally, using an analog VGA cable to connect your HDTV to your PC can allow noise or other interference to disrupt the signal coming from your PC, thereby worsening the screen’s image quality. For best results, use a short HDMI cable to connect your HDTV and your PC.
What is UEFI, and why should I care?
The Unified Extensible Firmware Interface is a new firmware interface specification that is designed to replace the familiar BIOS interface.
When you turn on your computer, UEFI’s firmware will run an inventory of the hardware installed on the system; after checking that everything is functioning properly, it will launch the operating system and turn control of the PC hardware over to the software (and to you). UEFI supports a wider range of chip architectures (including 32-bit and 64-bit processors like the ARM chips that will be in Windows 8 tablet PCs) than does BIOS, which is confined to a 16-bit processor mode left over from the earliest PCs.
The new spec works very well, and nearly all UEFI firmware images include a compatibility module that supports older BIOS services, so you should rarely have a problem moving from a motherboard flashed with BIOS to one flashed with UEFI.
What is ‘Free Public Wi-Fi,’ and why doesn’t it ever work?
The “Free Public Wi-Fi” network you see in various public places is the result of an old Windows XP bug that causes the OS to set up an ad hoc data-sharing network for connected PCs if it can’t connect to a trusted wireless network automatically. Windows names the ad hoc setup after its previous Wi-Fi network connection. Long ago, after failing to connect to a trusted network named “Free Public Wi-Fi,” a Windows PC created just such an ad hoc entity. Drawn by the word “Free,” local laptop users hoping to get online connected to the impotent ad hoc network, and thus began broadcasting (and perpetuating) it themselves.
Connecting to this type of device-to-device ad hoc network rarely poses any immediate danger, but it won’t get you onto the Web, either. And malicious users could spy on the connection and steal valuable information from you.
To ensure that your PC isn’t inadvertently broadcasting an impotent ad hoc network, update it to at least Windows XP with Service Pack 3. If you run Windows Vista, 7, or 8, you have nothing to worry about; nevertheless, avoid connecting to open Wi-Fi networks with names like “Free Public Wi-Fi” or “Default.”
Why can’t I upgrade from 32-bit Windows to 64-bit Windows?
Upgrading a copy of Windows 7 from 32-bit Home to 32-bit Professional is simple enough, but upgrading to a 64-bit version of the OS requires you to do a fresh installation.
Windows handles information differently depending on whether you use the 32-bit or 64-bit version. In extremely broad terms, a 64-bit operating system can process data and address memory in bigger chunks than a 32-bit system can. That’s why Windows requires a fresh installation rather than an in-place upgrade when you move between 32-bit and 64-bit versions: The two versions organize system files, drivers, and registry data in fundamentally different ways. (The Windows Easy Transfer utility can carry your files and settings across, but not your installed applications.)
Moving from 32-bit to 64-bit Windows also entails reinstalling your applications–and although most 32-bit programs run fine on 64-bit Windows, few will work any better afterward. The main reasons to upgrade from 32-bit to 64-bit Windows are to take advantage of a 64-bit processor (which most modern PCs have) and to use more than 4GB of RAM. If you don’t have that much RAM, upgrading to 64-bit Windows may not be worth the hassle.
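That 4GB ceiling falls directly out of pointer size. A quick back-of-the-envelope calculation:

```python
# Why 32-bit Windows tops out around 4GB of RAM: address-space arithmetic.
addresses_32 = 2 ** 32   # distinct addresses a 32-bit pointer can hold
addresses_64 = 2 ** 64   # distinct addresses a 64-bit pointer can hold

GB = 1024 ** 3           # bytes in a gigabyte

print(addresses_32 // GB)        # 4  -> roughly 4GB addressable
print(addresses_64 // GB ** 2)   # 16 -> 16 exabytes, in theory
```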
Why won’t my motherboard beep as my PC boots up?
Most motherboards emit a loud beep when you power on the PC to let you know that they’re working properly. If something goes wrong, they’ll emit a specific sequence of beeps that you can look up in your motherboard’s manual to troubleshoot the problem.
If your motherboard doesn’t emit any beeps, and your PC isn’t booting properly, the board may be dead. Verify that it is plugged in properly and try booting up again; if you still don’t hear any beeps and your PC doesn’t power on, you may need a new motherboard.
Alternatively, the motherboard may use on-screen notifications during the boot sequence, and LEDs on the motherboard itself, in place of beeps.
What is IPS, and why is it desirable?
When shopping for a new monitor, you may be puzzled about the difference between displays that employ Twisted Nematic panels and displays that contain newer In-Plane Switching panels.
IPS panels are superior to older TN panels because they can display images with a broader range of color and brightness values, and because they offer a wider range of good viewing angles than TN displays do. In addition, IPS panels don’t react as strongly to touch as TN displays (which tend to lighten and blur under pressure), making them ideal for use as touchscreens. Though IPS was initially developed in 1996, the technology is still not standard; thus, when you compare monitor models, you may see displays that offer TN panels and such IPS-panel variants as Super IPS and IPS Pro.
Most IPS variants offer improvements over the basic IPS-panel technology. Nevertheless, a few good reasons remain for preferring an older TN-panel display to an IPS-panel display. Because TN panels are cheaper to produce, monitors that incorporate them tend to cost less. Also, TN panels can achieve greater brightness levels than most IPS variants, and they have much faster response times, which makes them better for stereoscopic 3D applications.
How can Thunderbolt make my PC perform better?
Thunderbolt is the fancy name for a new high-speed interface designed by Intel. You may have heard about it back in 2011 when Apple updated its MacBook Pro laptops to include Thunderbolt ports. Only now are we beginning to see Windows laptops equipped with Thunderbolt ports appearing on the market.
Thunderbolt ports are nice if you can get them, because they’re so much faster and more efficient at moving data between devices. The Thunderbolt interface combines the high-speed PCI Express interface and the DisplayPort interface into a single interface supporting a serial data stream that is easy to transmit over long distances. Since Thunderbolt can transmit data, audio, video, and power over a single cable, hardware manufacturers can reduce the number of cables and ports that they must provide for connecting to different devices. The technology allows you to daisy-chain up to six Thunderbolt devices if you have enough cables and ports to do so. Just run a Thunderbolt cable from your PC to your external hard drive, from your external hard drive to your sound system, and thence into your monitor.
This arrangement works only if every Thunderbolt device on the chain can pass data along the chain–and that could be a problem, since Thunderbolt technology is still new, and few Thunderbolt-capable devices are on the market. (See “Use Speedy Thunderbolt Hardware for Faster Data Transfers” for examples of some currently available products.)
Thunderbolt ports are still rare enough that you’ll probably have to invest in some adapters if you want to start using this new technology right away; but you can already buy Thunderbolt adapters for common Mac standards such as FireWire, and we should start seeing USB and HDMI adapters shortly. Look for more Thunderbolt devices from Acer, Asus, and Lenovo before the end of 2012.
Fun fact: Thunderbolt was originally designed to transmit data by fiber optics, but almost all Thunderbolt cables to date use copper wires instead, to keep manufacturing costs low. As of our publication date, only one company, Sumitomo, sells Thunderbolt cables built around optical fiber. Although optical fiber can transmit data faster than copper and over longer distances, it’s also more expensive to produce. Contemporary copper-based Thunderbolt devices are still blazing fast and can transfer data at theoretical speeds of up to 10 gigabits per second, but when we’ll see more true fiber-optic Thunderbolt cables on the market remains a mystery.
Why is printer ink so expensive?
The short answer: Because printer manufacturers can get away with charging you that much. Without ink, your printer is just a big paperweight, and companies like Canon, HP, and Lexmark know it. That’s why they can afford to sell printers for under $100; they’re betting that most printer owners will continue to invest in ink cartridges that may cost $20 to $40 a pop over the course of several (or many) years. Third-party refilled or remanufactured ink cartridges may be a lower-cost alternative, but some are messy to install or deliver inferior print quality.
That’s because printer ink is surprisingly difficult to replicate. But perhaps we shouldn’t find this difficulty so surprising. Even consumer-grade printer ink is a technological marvel–capable of remaining fluid at extremely high temperatures, and then drying instantly on paper after being shot through a tiny nozzle at a speed of roughly 30 miles per hour. Good luck getting your ballpoint pen cartridges to match those requirements!
Of course, printer ink is less expensive if you buy a printer that matches your printing needs. Melissa Riofrio, PCWorld senior editor and printer aficionado, has spilled plenty of ink in the process of reporting on the printer industry over the years; her testing suggests that purchasing a more expensive printer ($200 or above) is usually a smart decision if you print more than 250 pages per month, as ink and toner replacements for expensive printers tend to cost less than replacement cartridges for cheaper models. However, if your print output is less than that, stick with the cheapest printer that meets your needs.
Why does iTunes lose track of where my music is?
If iTunes often forgets where your music is stored and displays a ‘File Not Found’ exclamation mark, the reason may be that you store your music on an external device iTunes can’t always connect to.
When I use iTunes to load music onto my iPod, I have to tell iTunes where I keep my files, by dragging them into the iTunes library. This creates a link for each song, informing iTunes of where to find it–but when I disconnect my portable hard drive, the links become invalid; so iTunes displays a ‘File Not Found’ error and asks me to reconfirm where the file is on my hard drive.
To stop this problem from happening, open the File menu in iTunes, navigate to the Library section, and select Organize Library…. From here, check the box next to Consolidate Files, and iTunes will create copies of every media file played via iTunes in the iTunes Media folder, which ought to be located on your PC’s internal storage. This method may cause your hard drive to fill up with media files in the long term, but it will eliminate those annoying ‘File Not Found’ errors.
Why does my new monitor randomly go blank when I plug in an HDMI cable?
This problem occasionally crops up when the devices at either end of an HDMI cable fail to sync up correctly. Since HDMI is a digital interface, it incorporates a copy-protection measure called High-bandwidth Digital Content Protection (HDCP) that is designed to discourage illegal duplication of media transmitted via the cable.
When you plug an HDMI cable into your PC, the devices at each end verify that everything is legit by exchanging sequences of numbers unique to each device; this process is commonly referred to as the HDCP handshake. If a device transmits an incorrect HDCP sequence, the connection will not work properly.
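As an illustration only (real HDCP issues each device 40 secret 56-bit keys plus a public Key Selection Vector; the numbers below are made up), here’s a toy model of how two devices can derive a matching value–and why a transmission error breaks the handshake:

```python
# Toy HDCP-style handshake (illustrative only; not the real algorithm).
def shared_key(secret_keys, peer_ksv):
    # Sum the secret keys selected by the 1-bits of the peer's public KSV.
    return sum(k for k, bit in zip(secret_keys, peer_ksv) if bit)

# Hypothetical key material; real HDCP derives keys from a master matrix
# so that any two licensed devices arrive at the same value.
source_keys, source_ksv = [3, 1, 4, 1, 5], [1, 0, 1, 1, 0]
sink_keys, sink_ksv = [3, 1, 4, 1, 5], [1, 0, 1, 1, 0]

ok = shared_key(source_keys, sink_ksv) == shared_key(sink_keys, source_ksv)
print(ok)  # True: handshake succeeds, picture appears

# A KSV garbled in transit yields a mismatch -- and a blank screen.
garbled_ksv = [1, 0, 1, 0, 0]
print(shared_key(source_keys, garbled_ksv) ==
      shared_key(sink_keys, source_ksv))  # False
```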
Though your monitor may be malfunctioning, the likelier reason why your display occasionally blacks out when you plug in an HDMI cable and then comes back on 5 to 10 seconds later is that your devices are completing the HDCP handshake. The same thing can happen when you bring your monitor back from sleep mode, as the HDCP handshake protocol must be completed every time your HDMI cable comes back up to full voltage.
HDMI devices occasionally fail the HDCP handshake due to a transmission error or a surge in voltage, so if your monitor blacks out and stays black when you plug in an HDMI cable, switch inputs to HDMI, or bring the monitor back from sleep mode, you’ll probably have to power-cycle your display or completely restart your PC.
Why does my OS hide some of my critical system files?
The goal is to make it harder for untrained users to modify or delete them, and thereby cause a system error. The precaution is sensible because most people shouldn’t tamper with those files (like config.sys).
If you’re curious to see what files and folders are hidden on your Windows PC, here’s how to peek behind the curtain.
Open Windows Explorer, and choose Tools, Folder Options from the menu bar. Windows Vista and Windows 7 hide this menu by default; to make it appear, tap the Alt key. In the Folder Options dialog box, select the View tab and scroll through the Advanced Settings list to find the setting that controls how Windows handles hidden files and folders. Enable the Show hidden files, folders, and drives option, and you should be able to see all of the hidden files and folders on your PC.
After finding the file you were looking for and making all necessary changes, consider disabling the option. Leaving your critical system files visible in Windows Explorer increases the chance that someone will accidentally move or delete one of them, leading to an even greater mystery when your PC suddenly fails to boot properly one day.
In what ways does USB 3.0 differ from USB 2.0?
USB 3.0 data transfers have a theoretical maximum speed of 5 gigabits per second, in contrast to the theoretical maximum speed of 480 megabits per second of USB 2.0. Though you probably won’t obtain 5-gbps transfers during daily use, you should find that you can move files significantly faster via USB 3.0 than via USB 2.0.
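To put those ceilings in perspective, here’s a back-of-the-envelope calculation of how long a 700MB file would take at each bus’s theoretical maximum (real-world transfers are slower):

```python
# Compare theoretical transfer times for USB 2.0 and USB 3.0.
def transfer_seconds(megabytes, gigabits_per_second):
    bits = megabytes * 8 * 10 ** 6          # decimal megabytes -> bits
    return bits / (gigabits_per_second * 10 ** 9)

print(round(transfer_seconds(700, 0.48), 1))  # USB 2.0 at 480 mbps -> 11.7s
print(round(transfer_seconds(700, 5.0), 1))   # USB 3.0 at 5 gbps   -> 1.1s
```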
Transferring data between USB 3.0 devices is also more efficient, because USB 3.0 permits simultaneous data transfers in both directions; USB 2.0 devices can transmit data in just one direction at a time. Of course, to take advantage of these upgrades you’ll have to invest in new USB 3.0 devices and cables. Though USB 3.0 is backward-compatible and will work with all old USB 2.0 gear, you must buy a USB 3.0 cable if you want your new USB 3.0 devices to exchange data at full speed.
Why do I need administrator access for some tasks?
It’s a security precaution. Windows requires you to have administrator access in order to modify or delete files, if doing so might affect other people who use the computer. This usually isn’t a problem if you set up the PC yourself, since the primary account on any Windows machine is assigned administrator privileges by default; but if you don’t know the administrator password (because you bought the PC used, for example), you could be in a pickle.
Normally, gaining administrator access in Windows when you don’t know the password to the account entails either reinstalling Windows or using third-party software like the Offline NT Password & Registry Editor to reset the password. PCWorld Contributing Editor Lincoln Spector has written about this issue extensively in his Answer Line column, and you can find his advice on using the Offline NT Password & Registry Editor to gain administrator access.
Fun fact: A hidden administrator account on every Windows 7 PC has privileges that supersede any other user’s–and it runs with User Account Control prompts disabled by default. To see it, you must first log in to your Windows 7 PC with an account that has administrator access. From there, find Command Prompt in the Start menu, right-click it, and select Run as administrator. Once the command prompt is open, type net user administrator /active:yes and press Enter. If the command executes successfully, you should be able to exit the Command Prompt, log out of your Windows account, and see the now-visible Administrator account. To cloak it again, follow the same process, changing the Command Prompt entry to read net user administrator /active:no.
Why do I have to reactivate software after upgrading my motherboard?
Most programs link their serial key to the computer you install them on–and many software vendors tie your unique product key to the MAC address, the unique hardware address of your PC’s ethernet adapter. Some apps remain tied to the volume serial number of your hard drive, but many vendors prefer to use the MAC address because it is unique to your PC and easy to transmit to the vendor’s website when you register your software online. The ethernet adapter is usually built into the motherboard; when you upgrade the board, you get a new MAC address, and many of your applications must be reactivated.
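As a sketch of how such a scheme might work (the hashing and truncation here are invented for illustration; Python’s uuid.getnode() reports the current machine’s MAC address as a 48-bit integer):

```python
# Sketch: deriving a machine fingerprint from the MAC address, as some
# activation schemes do. The hashing scheme here is made up.
import hashlib
import uuid

def machine_fingerprint(mac=None):
    # Default to this machine's MAC; accept an explicit one for testing.
    mac = uuid.getnode() if mac is None else mac
    return hashlib.sha256(mac.to_bytes(6, "big")).hexdigest()[:16]

# Same MAC -> same fingerprint; a new motherboard -> new MAC -> new one.
old_board = machine_fingerprint(0x001A2B3C4D5E)  # hypothetical MACs
new_board = machine_fingerprint(0x009F8E7D6C5B)
print(old_board != new_board)  # True: the activation no longer matches
```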
To combat software piracy, some PC games require you to reactivate them and to verify your identity after a hardware change. Also, some game publishers now require players to be constantly connected to the publisher’s servers to play their games.
What is a DisplayPort connection, and how does it affect me?
If you recently purchased a new PC or monitor, you may be mystified by the “DP” or “DisplayPort” connection on your new device. This component is exactly what it sounds like: a digital multimedia interface for shuttling data between your PC and your display.
Introduced in 2006, DisplayPort is an open industry standard that companies such as Apple, HP, Intel, and Samsung support. But we have VGA, DVI, HDMI, and now Thunderbolt cables for connecting computers, tablets, and smartphones to monitors and HDTVs. So why should you use DisplayPort instead of a standard DVI cable?
In the first place, the technology is just plain better. DisplayPort can deliver display data to your monitor or HDTV more efficiently than DVI or VGA can, because it transfers signal data to your display in discrete packets rather than in a steady stream. Each data packet contains its own time stamp, which helps your devices assemble the data more easily into what they’re supposed to display on screen. The data-packet approach also reduces distortion and image degradation, and it allows developers to modify how DisplayPort transmits their data. DisplayPort cables can carry audio data as well as video data, and they’re compatible with most popular display interfaces, if you’re willing to purchase an adapter. Consequently you can hook up your new graphics card with DisplayPort connectors to an old VGA/DVI monitor, though most major manufacturers plan to phase out VGA and DVI in the next decade in favor of DisplayPort.
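A toy sketch of the packetized approach–assuming nothing about the actual DisplayPort micro-packet format–shows why stamped packets are easy to reassemble: split a frame into stamped chunks, scramble them in transit, and let the receiver restore the order.

```python
# Toy model of packetized transmission (illustrative only).
import random

def packetize(data, size):
    # Stamp each chunk with its position in the frame.
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def reassemble(packets):
    # The stamps let the receiver restore order even if packets arrive
    # scrambled -- a steady analog stream carries no such marker.
    return b"".join(chunk for _, chunk in sorted(packets))

frame = bytes(range(32))
packets = packetize(frame, 8)
random.shuffle(packets)          # simulate out-of-order arrival
print(reassemble(packets) == frame)  # True
```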
In the long term, this is good news for PC owners, in part because the DisplayPort standard is open and royalty-free (which should encourage competition in the market and lower prices). Also, DisplayPort connectors are more powerful and easier to hook up than DVI or VGA cables; instead of fiddling with tiny thumbscrews or worrying about bending a bunch of minuscule pins, you can quickly plug a (significantly smaller) DisplayPort cable into the back of your devices.
Why do certain apps always run at startup?
You may have told them to, or they may have assumed that you would have done so if you’d thought of it. Free programs such as iTunes, Spotify, and Steam are particularly aggressive about configuring themselves to launch automatically during startup; but some premium paid software suites (like Adobe’s) are similarly presumptuous.
Of course, having apps for input devices, antivirus protection, and other critical functions run at startup is a good thing, even if they cause Windows to boot a little slower.
The setting for telling a program when to start up is usually located in the application’s Preferences, Tools, or Options menu; consult your program manual for specific guidance.
If you can’t stop a program from launching at startup from within the program itself, edit the Windows startup sequence. Click the Start button and type msconfig in the search box. This should open your PC’s System Configuration utility, where you can change how your system boots and what applications and services it launches at startup. Select the Startup tab, scroll through the list of programs, and uncheck everything that you don’t want to start automatically. If the program you want to halt doesn’t appear in the System Configuration utility, don’t panic; PCWorld Senior Editor Loyd Case points out that a few applications use the Windows Task Scheduler to launch themselves during startup, so you’ll need to open the Windows Task Scheduler and disable them manually.
It’s generally safe to stop programs that you don’t recognize (such as the Default Manager and Java Platform Updater) from running automatically, and trimming the list can keep your PC purring along smoothly. If you run into trouble, simply return to the System Configuration utility and let the troublesome application launch during startup again.