My packaging efforts for Debian can be seen here. I wrote apt-offline (Offline Package Manager) when stuck with real-world IT policies. I currently maintain laptop-mode-tools, a tool that helps your Linux kernel conserve power.
While upgrading the memory (RAM) in my notebook, I spent a lot of time studying exactly what kind of RAM it would take without problems. By the time I had settled on the RAM I needed, I had also compiled a small document of basic RAM information, courtesy of the Crucial® FAQs.
I hope this guide is helpful to you, though it comes with no warranty. :-)
Do I have to buy the same size upgrade as the memory module currently installed in my computer or can I mix different sizes?
In newer systems using SDRAM or DDR SDRAM memory, you can use modules of different densities with no problem. For example, if your computer came with a 128MB memory module, you can add a 256MB module for a total of 384MB of RAM. However, if you have a "dual-channel" system and want to take advantage of that technology, you will need to ensure that the modules in each memory slot are the same density.
Can I mix and match speeds?
Rather than give memory modules catchy names, modules are referred to by their specifications. If you don't know a lot about memory, the numbers can be confusing. Here's a short summary of the most popular types of memory and what the numbers refer to.
DDR PC1600, PC2100, PC2700, and PC3200 (DDR400)
In DDR modules, the numbers that come after the "PC" refer to the total bandwidth of the module. For this type of memory, a higher number represents faster memory, or more bandwidth. Occasionally DDR is referred to as "DDR400" or "DDR333," for example. When written this way, the numbers after "DDR" refer to the data transfer rate of the components.
PC1600 memory is DDR designed for use in systems with a 100-MHz front-side bus (providing a 200 mega transfers per second (MT/s) data transfer rate). The "1600" refers to the module's bandwidth (the maximum amount of data it can transfer each second), which is 1.6 GB. PC1600 has been replaced by PC2100, which is backward compatible.
PC2100 memory is DDR designed for use in systems with a 133-MHz front-side bus (providing a 266 MT/s data transfer rate). The "2100" refers to the module's bandwidth (the maximum amount of data it can transfer each second), which is 2.1 GB. PC2100 is used primarily in AMD Athlon, Pentium III, and Pentium 4 systems.
PC2700 memory is DDR designed for use in systems with a 166-MHz front-side bus (providing a 333 MT/s data transfer rate). The "2700" refers to the module's bandwidth (the maximum amount of data it can transfer each second), which is 2.7 GB.
PC3200 (commonly referred to as DDR400) memory is DDR designed for use in systems with a 200-MHz front-side bus (providing a 400 MT/s data transfer rate). The "3200" refers to the module's bandwidth (the maximum amount of data it can transfer each second), which is 3.2 GB.
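To see where the "PC" numbers come from, here is a quick back-of-the-envelope sketch in shell. It assumes the standard 64-bit (8-byte) wide DDR module; the marketing names round the results slightly.

    # bandwidth (MB/s) = data transfer rate (MT/s) x 8 bytes per transfer
    for rate in 200 266 333 400; do
        echo "DDR${rate}: ${rate} MT/s x 8 bytes = $(( rate * 8 )) MB/s"
    done
    # 200 -> 1600 MB/s (PC1600), 266 -> 2128 MB/s (sold as PC2100),
    # 333 -> 2664 MB/s (sold as PC2700), 400 -> 3200 MB/s (PC3200)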
SDRAM PC100 and PC133
In SDRAM modules, the numbers that come after the "PC" refer to the speed of the system's front side bus.
PC100 memory is SDRAM designed for use in systems with a 100-MHz front-side bus. It is used in many Pentium II, Pentium III, AMD K6-III, AMD Athlon, AMD Duron, and Power Mac G4 systems.
PC133 memory is SDRAM designed for use in systems with a 133-MHz front-side bus. It is used in many Pentium III B, AMD Athlon, and Power Mac G4 systems.
Older memory technology such as PC66 SDRAM, FPM, and EDO
PC66 memory is SDRAM designed for use in systems with a 66-MHz front-side bus. It is used in the Pentium 133-MHz systems and Power Macintosh G3 systems.
FPM and EDO speeds are written in nanoseconds (ns), which indicates their access time; the lower the number, the faster the memory (it takes fewer nanoseconds to process data).
It may seem confusing, but faster memory will not necessarily make your system faster. You can't speed up your computer by adding faster memory if other components in your computer (your processor or other memory modules) operate at a slower speed.
Keep in mind that the right memory for your computer is the kind of memory it was designed to take. Check your system manual or look up your system in the Crucial Memory Advisor to find the memory guaranteed to be 100 percent compatible, or your money back!
What is the difference between PC2100 (DDR266), PC2700 (DDR333), and PC3200 (DDR400)?
PC2100 (DDR266) memory, PC2700 (DDR333) memory, and PC3200 (DDR400) memory are all types of Double Data Rate (DDR) SDRAM. The varying numbers refer to the different speeds of memory your computer was designed for.
Let's take a look at PC2100 (DDR266) to break it down simply.
PC2100 refers to the bandwidth of the memory. A PC2100 module has a bandwidth of 2.1GB/sec, which is why it is referred to as PC2100.
DDR266 refers to the effective front-side bus speed of your system. While your DDR system or motherboard may operate at a 133MHz front-side bus, its effective front-side bus speed is 266MHz, because DDR transfers data on both edges of the clock, effectively doubling the amount of data transferred per cycle compared with a non-DDR system.
The same holds true for PC2700 (DDR333) which has a bandwidth of 2.7GB/sec and is designed for use in systems and motherboards which require a 166MHz front-side bus, with an effective front-side bus speed of 333MHz.
PC3200 DDR (DDR400) has a bandwidth of 3.2GB/sec and is designed for use in systems and motherboards which require a 200MHz front-side bus with an effective front-side bus speed of 400MHz.
Though DDR memory was designed to be backward compatible (meaning you can use PC3200 DDR in a computer designed to use PC2100 DDR or vice-versa), we always recommend that you use the Crucial Memory Selector to find exactly the right memory for your computer.
My computer uses PC2700 (DDR333). Can I use PC3200 (DDR400)?
DDR memory was designed to be backward compatible so generally speaking, you can safely add faster memory to your computer. For example, you can install a PC3200 DDR module in a computer that calls for PC2700 DDR. However, keep in mind that faster memory will not necessarily make your system faster. You can't speed up your computer by adding faster memory if other components in your computer (your processor or other memory modules) operate at a slower speed.
The right memory for your computer is the kind of memory it was designed to take. Check your system manual or look up your system in the Crucial Memory Advisor to find the memory guaranteed to be 100 percent compatible or your money back.
What is dual-channel DDR memory?
The terminology "dual-channel memory" is being misused by some in the memory industry, which can mislead the consumer. The fact is there's no such thing as dual-channel memory. There are, however, dual-channel platforms.
When properly used, the term "dual channel" refers to the DDR or DDR2 chipset on certain motherboards designed with two memory channels instead of one. The two channels handle memory-processing more efficiently by utilizing the theoretical bandwidth of the two modules, thus reducing system latencies, the timing delays that inherently occur with one memory module. For example, one controller reads and writes data while the second controller prepares for the next access, hence, eliminating the reset and setup delays that occur before one memory module can begin the read/write process all over again. Think of it like two relay runners. The first runner runs one leg while the second runner sets up and prepares to receive the baton smoothly and carry on the task at hand without delay. While performance gains from dual-channel chipsets aren't huge, they can increase bandwidth by as much as 10 percent. To those seeking to push the performance envelope, that 10 percent can be very important.
So the next time you come across a product that's touted and sold as dual-channel memory, know this: it's simply two DDR or DDR2 memory modules, packaged and marketed as a specialty product or a must-have "kit." If indeed you have a dual-channel platform and you want to take advantage of the performance gain it offers, our advice is to opt for high quality and service over expensive packaging, and simply purchase your DDR or DDR2 memory in pairs. However, be very careful to order two modules with the exact same specifications; the modules must be identical to each other to perform correctly.
How much RAM can Windows handle?
That depends on two factors: the amount of memory your computer can handle, and the amount of memory your Windows operating system (OS) can handle.
First, your computer is designed to hold a maximum amount of RAM. When you look up your computer in the Memory Selector, you will see the system maximum on the page that lists the compatible upgrades for your system.
Second, the OS maximum is the maximum amount of memory that your particular version of Windows, Linux, or Mac OS can handle.
When purchasing your memory upgrade, ensure you do not exceed the lower of the two maximums (the OS and computer maximums). Too much RAM can lower your system's performance or cause other problems. (In most cases, the system maximum is lower than the OS maximum.)
Here are the OS maximums for popular versions of Microsoft Windows.
Windows 95: 1GB
Windows 98: 1GB
Windows 98SE: 1GB
Windows ME: 1.5GB
Windows NT: 4GB
Windows 2000 Professional: 4GB
Windows 2000 Advanced Server: 4GB, or 8GB with PAE enabled
Windows 2000 Datacenter Server: 4GB, or 64GB with PAE enabled
Windows XP Home: 4GB
Windows XP Professional: 4GB
Here are the maximums for some other platforms.
OS X: 8GB (due to current hardware limitations)
OS 9.x: 1.5GB (no single application can utilize more than 1GB)
Linux: 64GB
How can I "max out" the memory on my computer?
Once you have found your computer in the Crucial Memory Advisor™ tool, locate the maximum memory capacity and number of memory slots for your particular computer. Generally speaking, you can determine the largest memory module each slot can take by dividing the maximum capacity by the total number of slots. For example, if your computer has a maximum memory capacity of 2048MB (i.e. 2GB), and has two slots, the largest module you can install in each slot is a 1GB memory module.
However, there are exceptions to this general rule. For example, there are situations where a computer with 4 memory slots and a 2GB maximum memory capacity will accept two 1GB modules. There are also times when a system would only accept a 512MB module at launch, but down the road would take a 1GB module with a BIOS upgrade. This is why it is so important to use the Crucial Memory Advisor™ tool which can alert you to these exceptions.
To completely "max out" the memory on your computer, you may need to actually remove memory modules currently installed and replace them with larger-capacity modules. Using the example above, if your computer has one 512MB module already installed, with one memory slot open, you would need to remove the 512MB and install two 1GB modules to truly "max out" your computer.
The System Scanner results showed my computer's front side bus speed is 200MHz but it's supposed to be 800MHz. What's wrong?
Many newer computers use double data rate or "quad-pumped" front-side buses. While the actual front-side bus frequency is 200MHz, the enhanced capability of your computer's front-side bus allows it to perform like a 400MHz or 800MHz front-side bus.
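As a small illustrative sketch (200MHz is just the example value from the question):

    # effective speed = actual bus clock x pumping factor
    actual_mhz=200
    echo "double-pumped: $(( actual_mhz * 2 )) MHz effective"   # 400 MHz
    echo "quad-pumped:   $(( actual_mhz * 4 )) MHz effective"   # 800 MHz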
I have 256MB of memory, but the Crucial System Scanner only shows 248MB. What happened to the missing memory?
Quite a few computers, especially notebook computers and budget desktop computers, use integrated graphics processors that share memory with the main system memory. As a result, your computer will set aside a portion of the main system memory for the integrated graphics processor, reducing the actual amount of memory available to the rest of your computer. This amount can range from as low as 8MB to as high as 128MB, depending on your computer's specific configuration. While some computers allow you to adjust how much memory the graphics processor uses, there is no way of preventing this from happening, short of installing a graphics card with dedicated memory for the graphics processor.
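A tiny sketch of the arithmetic, assuming the integrated graphics processor reserves 8MB (the actual amount varies by system):

    installed_mb=256
    reserved_for_graphics_mb=8
    echo "Reported by the scanner: $(( installed_mb - reserved_for_graphics_mb )) MB"   # 248 MB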
Rambus® and DDR Explained
DDR SDRAM
DDR (double data rate) memory is the next generation of SDRAM. Like SDRAM, DDR is synchronous with the system clock. The big difference between DDR and SDRAM memory is that DDR reads data on both the rising and falling edges of the clock signal, while SDRAM only carries information on the rising edge of the signal. Basically, this allows a DDR module to transfer data twice as fast as SDRAM. For example, instead of a data rate of 133MHz, DDR memory transfers data at 266MHz.
DDR modules, like their SDRAM predecessors, are called DIMMs. They use motherboard system designs similar to those used by SDRAM; however, DDR is not backward compatible with SDRAM-designed motherboards. DDR memory supports both ECC (error correction code, typically used in servers) and non-parity (used on desktops/laptops.)
If your system or motherboard requires DDR, you can purchase the upgrades you need through Crucial's Memory Selector™. For more information on DDR, see the DDR Shopping Guide.
Rambus DRAM
Rambus memory (RDRAM®) is a revolutionary step from SDRAM. It's a memory design with changes to the bus structure and how signals are carried. Rambus memory sends less information on the data bus (which is 16 or 18 bits wide as opposed to the standard 64 or 72 bits) but it sends data more frequently. It also reads data on both the rising and falling edges of the clock signal, as DDR does. As a result, Rambus memory is able to achieve effective data transfer speeds of 800MHz and higher.
Another difference with Rambus memory is that all memory slots in the motherboard must be populated. Even if all the memory is contained in a single module, the "unused" sockets must be populated with a PCB, known as a continuity module, to complete the circuit.
Rambus DRAM modules are known as RIMM™ modules (Rambus inline memory modules). Rambus memory supports both ECC and non-ECC applications.
Production Challenges
One of the challenges Rambus memory faces is that it is expensive to produce compared to SDRAM and DDR. Rambus memory is proprietary technology of Rambus Inc. Manufacturers that want to produce it are required to pay a royalty to Rambus Inc., whereas DDR designs are open architecture. Other cost factors for Rambus memory include additional module manufacturing and testing processes and a larger die size. Rambus die (chips) are much larger than SDRAM or DDR die. That means fewer parts can be produced on a wafer.
Performance
Now for the million-dollar question: how do DDR and Rambus memory compare performance-wise? Sorry, I know you don't want to hear this — that depends. Both technologies have their own ardent supporters, and we have seen several different benchmarks to date that provide conflicting results.
On the surface, it seems simple: data flow at 800MHz is faster than data flow at 266MHz, right? Unfortunately, it isn't that simple. While Rambus modules may be able to transfer data faster, they appear to have higher latency (the amount of time you have to wait until data flows) than a DDR system. In other words, the first data item transferred in a Rambus transaction takes longer to initiate than the first data item moved in a DDR system. This is due in part to how the systems are constructed.
In a DDR or SDRAM system, each DIMM is connected, individually and in parallel, to the data bus. So whether you have a single DIMM or multiple DIMMs, the amount of time it takes to initiate a data transfer is effectively unchanged.
In a Rambus system, RIMM modules are connected to the bus in a series. The first data item transferred must pass through each RIMM module before it reaches the bus. This makes for a much longer distance for the signal to travel. The result is higher latency. That's not necessarily a problem in an environment where data transactions involve lengthy streams of data, such as gaming. But it can become an issue in environments where many small transactions are initiated regularly, such as a server.
To further explain, here's an example that we can all relate to — driving your car to the store. You can take the roundabout freeway and drive 20 miles at 70 MPH. Or, you can take a more direct route and drive just 5 miles at 50 MPH. You might go faster on the freeway but you'll get to the store (Memory Controller) faster on the straight-line route.
Bottom Line
Generally speaking, motherboards are built to support one type of memory. You cannot mix and match SDRAM, DDR, and Rambus memory on the same motherboard in any system. They will not function and will not even fit in the same sockets. The right type of memory to use is the one that your motherboard takes! And no matter what type of memory you use, more is typically better. A memory upgrade is still one of the most cost-effective ways to improve system performance.
Micron manufactures memory - Here's how!
Memory chips are integrated circuits with various components (transistors, resistors, and capacitors) formed on the same chip. These integrated circuits begin as silicon, which is basically extracted from sand. Turning silicon into memory chips is an exacting, meticulous procedure involving engineers, metallurgists, chemists and physicists.
Memory is produced in a very large facility called a fab, which contains many cleanroom environments. Semiconductor memory chips are manufactured in cleanroom environments because the circuitry is so small that even tiny bits of dust can damage it. Micron's Boise facility covers over 1.8 million square feet and has class 1 and class 10 cleanrooms. In a class 1 cleanroom, there is no more than 1 particle of dust in a cubic foot of air. In comparison, a clean, modern hospital has about 10,000 dust particles per cubic foot of air. The air inside a cleanroom is filtered and recirculated continuously, and employees wear special clothing such as dust-free gowns, caps, and masks to help keep the air particle-free. This special clothing is commonly referred to as a bunny suit.
The first step from silicon to integrated circuit is the creation of a pure, single-crystal cylinder, or ingot, of silicon six to eight inches in diameter. These cylinders are sliced into thin, highly polished wafers less than one-fortieth of an inch thick. Micron uses six- and twelve-inch wafers in its fabrication processes. The circuit elements (transistors, resistors, and capacitors) are built in layers onto the silicon wafer.
Most chip designs are developed with the help of computer systems or computer-aided design (CAD) systems. Circuits are developed, tested by simulation, and perfected on computer systems before they are actually built. When the design is complete, glass photomasks are made: one mask for each layer of the circuit. These glass photomasks are used in a process called photolithography.
In the sterile cleanroom environment, the wafers are exposed to a multiple-step photolithography process that is repeated once for each mask required by the circuit. Each mask defines different parts of a transistor, capacitor, resistor, or connector composing the complete integrated circuit and defines the circuitry pattern for each layer on which the device is fabricated.
Memory Chip Manufacturing Part 2
At the beginning of the production process, the bare silicon wafer is covered with a thin glass layer followed by a nitride layer. The glass layer is formed by exposing the silicon wafer to oxygen at temperatures of 900 degrees C or higher for an hour or more, depending on how thick a layer is required. Glass (silicon dioxide) is formed in the silicon material by exposing it to oxygen. At high temperatures, this chemical reaction (called oxidation) occurs at a much faster rate.
Next, the wafer is uniformly coated with a thick light-sensitive liquid called photoresist. Portions of the wafer are selected for exposure by carefully aligning a mask between an ultraviolet light source and the wafer. In the transparent areas of the mask, light passes through and exposes the photoresist.
Photoresist undergoes a chemical change when exposed to ultraviolet light. This chemical change allows the subsequent developer solution to remove the exposed photoresist while leaving the unexposed photoresist on the wafer. Wafers are exposed to a multiple-step photolithography process that is repeated once for each mask required by the circuit.
The wafer is subjected to an etch process (either wet acid or plasma dry gas etch) to remove that portion of the nitride layer that is not protected by the hardened photoresist. This leaves a nitride pattern on the wafer in the exact design of the mask. Hundreds of memory chips can be etched onto each wafer. The hardened photoresist is then removed (cleaned) with another chemical.
Dopants are frequently introduced as part of the layer formation in high temperature diffusion operations or with ion implanters. These dopants tailor the silicon's conductive characteristics making it either negative (n-type) or positive (p-type). These basic steps are repeated for additional layers of polysilicon, glass, and aluminum.
The finished wafer is an intricate sandwich of n-type and p-type silicon and insulating layers of glass and silicon nitride.
Memory Chip Manufacturing Part 3
All of the circuit elements (transistor, resistor, and capacitor) are constructed during the first few mask operations. The next masking steps connect these circuit elements together.
An insulating layer of glass (called BPSG) is deposited and a contact mask is used to define the contact points or windows of each of the circuit elements. After the contact windows are etched, the entire wafer is covered with a thin layer of aluminum in a sputtering chamber.
The metal mask is used to define the aluminum layer leaving a fine network of thin metal connections or wires.
The entire wafer is then covered with an insulating layer of glass and silicon nitride to protect it from contamination during assembly. This protective coating is called the passivation layer. The final mask and passivation etch removes the passivation material from the terminals, called bonding pads. The bonding pads are used to electrically connect the die to the metal pins of the plastic or ceramic package.
Every integrated circuit is tested. Functional and nonfunctional chips are identified and mapped into a computer data file. A diamond saw then cuts the wafer into individual chips. Nonfunctional chips are discarded and the rest are sent on to be assembled into plastic packages. These individual chips are referred to as die.
Before the die are encapsulated, they are mounted on to lead frames, and thin gold wires connect the bonding pads on the chip to the frames to create the electrical path between the die and lead fingers.
Product samples are taken out of the normal product flow for environmental and reliability assurance testing. These quality assurance tests push chips to their extreme limits of performance to ensure high-quality, reliable die and to assist engineering with product and process improvements.
During Encapsulation, lead frames are placed onto mold plates and heated. Molten plastic material is pressed around each die to form its individual package. The mold is opened, and the lead frames are pressed out and cleaned.
Electroplating is the next process where the encapsulated lead frames are "charged" while submerged in a tin/lead solution. The tin/lead ions are attracted to the electrically charged leads to create a uniform plated deposit which increases the conductivity and provides a clean consistent surface for surface mount applications.
In Trim & Form, lead frames are loaded into trim-and-form machines where the leads are formed step by step until finally the chips are severed from the frames. Individual chips are then put into antistatic tubes for handling and transportation to the test area for final testing.
Each memory chip is tested at various stages in the manufacturing process to see how fast it can store or retrieve information, including a high-temperature burn-in in Micron's proprietary AMBYX® ovens, which tests the circuitry of each chip, ensuring quality and reliability. This monitored burn-in provides feedback throughout the process, allowing identification and correction of manufacturing problems.
The completed packages are inspected, sealed, and marked with a special ink to indicate product type, date, package code, and speed.
Memory Module Manufacturing Part 4
Once memory chips are made, we still need a way to connect them to your computer. To do this, the chips are mounted to printed circuit boards (PCBs). The final assembled product is called a memory module.
Micron engineers design memory modules using Computer Aided Design (CAD) programs. Module sizes will vary depending on the chip's configuration (SIMM, DIMM, Memory type, etc.). Chip configuration also determines the electrical characteristics of the PCB.
The PCB is a critical part of the memory module. It enables your computer to access the memory. For this reason, Micron engineers place significant effort on correctly designing the PCB. Each design is tested by simulation and undergoes multiple design improvements prior to release for production.
PCBs are built in arrays, or sheets, made up of several identical boards. After assembly, the array will be separated into individual modules, similar to how a chocolate bar can be broken into small squares. By varying the total number of PCBs in each array based on size, Micron maximizes the number of modules made from a given amount of raw materials. The Micron Design Engineering group also interacts frequently with system manufacturers' engineers to optimize the design process and improve manufacturability of the customer's modules.
When the module design is perfected and the PCBs produced, memory module assembly begins! Assembly entails an intricate soldering procedure that attaches memory chips to the PCB.
Throughout the entire module assembly process, Micron takes great precaution to eliminate electrostatic discharge (ESD), or what most of us refer to as static electricity. ESD damage is a leading cause of device failure. That same "shock" you feel after shuffling your feet across carpet then touching something can completely destroy a memory chip. In fact, a person passing within 12 inches of an unprotected chip can cause damage. Micron team members wear protective clothing and use anti-static equipment during the assembly process. This ensures that any electrical charges on people or equipment will not transfer to the memory modules. Additionally, after every manufacturing step, the product is checked and verified and in-line Statistical Process Control (SPC) data is gathered. These checks provide immediate feedback to ensure continuous improvement.
Memory Module Manufacturing Part 5
The first step in assembling the memory module is Screen Print. A stencil is used to screen solder paste onto PCBs. The stencil ensures the solder paste affixes only where components will attach. Solder paste is tacky and holds chips in place on the PCB. If a chip is misplaced, it can be removed and the solder paste is cleaned off the board.
PCBs contain several marks called fiducials. These are not part of the circuit but are locators for placing chips. Vision systems in High Speed Automated Pick and Place machines scan the fiducials to dimensionally check and locate where to place chips on PCBs. Pick and Place machines are programmed to know which chips are placed where. The machine picks a chip from the feeder and places it in its appropriate location on the board. The same process occurs for all remaining chips and for any other components on the module. Of all the steps your memory module goes through, this is the fastest. All the chips are placed on a PCB in just a few seconds!
Next, the assembled chips and boards pass through an oven. The heat melts, or reflows, the solder into a liquid state. When the solder cools, it solidifies, leaving a permanent bond between chips and board. The surface tension of the molten solder prevents the chips from misaligning during this process.
Once the chips are attached, the array is separated into individual modules. Micron team members visually inspect each module. Many modules also undergo additional inspections using automated X-Ray equipment to ensure no unreliable solder joints exist. All Micron Memory Modules meet IPC-A-610 acceptance criteria - the industry standard recognized worldwide.
Micron then tests and tags the modules. We use custom equipment to automatically test performance and functionality. This eliminates any possibility of an operator mistakenly placing a failed module in a passing location. Certain modules are programmed with an identifying "Dog Tag" that your PC will recognize and read.
Finally, the modules are sampled through one last Quality Inspection. They are placed into ESD safe plastic trays or bags and are ready for delivery. Finished product is shipped to the customer. Micron is the memory supplier to the top PC manufacturers in the world.
Final Product
Who else manufactures memory? There are only a handful of "true" manufacturers of memory - that is, companies who fabricate the memory chips. These manufacturers sell their chips mostly to major computer manufacturers for use in their systems. In the memory upgrade market, however, there are a number of vendors who claim to be memory manufacturers, but the truth is, these vendors buy the memory chips from a manufacturer like Micron and then merely assemble the modules. Other vendors in the upgrade market simply buy the modules from a manufacturer, repackage them, and sell them under their brand name.
The information below was gladly sponsored by European taxpayers. The money went to a Dutch company for doing social research among developers. People doing this kind of research deserve financial compensation, of course.
The starting point of the research was a questionnaire for developers, asking about demographic information, orientation, motivation, earnings and employment.
The typical nerd
The survey started with a description of the typical geek or nerd that can be expected in a developer environment:
The subject is male.
If not stuck at home, then at least stuck to the computer.
Only interested in computers.
While some will say otherwise, it is generally believed they earn relatively high incomes.
The friends of a nerd are other nerds.
Nerds generally know their friends only by E-mail or IRC since they never go out.
The working day of a nerd usually starts when yours ends. They live on caffeine, nicotine and other -ines.
They are single. Since they never go out.
A large part are still students.
Or, as the dictionary puts it: geek: a carnival performer often billed as a wild man whose act usually includes biting the head off a live chicken or snake, which translates as "not the sociable kind".
Yes, it's true!
The actual survey was about checking if the above is all true. Most of it is.
And so much the better. Geeks are typically at the perfect age for reproduction and fun. Only one fifth are students, but 98% of them are male. But don't we all love a typical man with just a slight touch of the female nature? Machos with a soft spot?
Another advantage is that only one fifth of the geeks are married. That's probably not the same fifth that are students, but that still makes over half of them possible dates. Unless you love the risk of going for young boys or married men; in that case you still have the full choice.
My general impression is that women like to have a partner they can respect, somebody intelligent, and that women will give up an attractive partner for an intelligent one. Now, your average nerd is not very attractive, but they do have the brains, all consolidated in PhDs, university or high-school degrees and various certificates. So I'm quite sure that geeks or nerds are OK; gurus, on the other hand, are not good candidates in my opinion. It seems that you have to spend an awful lot of time behind your computer in order to achieve guru status, while you can be a nerd by more or less keeping up appearances. This can be done, specifically in the case of Open Source activists, by spending a couple of hours a week on your projects. Most people really don't do more than that, so there's plenty of time for building relationships.
Also on the financial front you're quite safe with a nerd: unemployed nerds virtually don't exist. Except for the students, but if you go for the young-boys option, you should be prepared to grab your wallet from time to time. It's a well-known fact they don't have anything at all. The average geek, on the other hand, makes a fair and reliable monthly income, enough to sustain a wife and children and a nice house in a nice neighbourhood.
Free/Open Source developers vs. closed source developers
The above statements should of course be revised if we are to compare Open Source developers with closed source developers. There's an equal number of both, but I would advise trying an Open Source developer if you have the choice. Open Source developers are generally more satisfied with their work, so consequently they will be more agreeable human beings. As an extra, they suffer less from time pressure, so they have less stress, and a longer life. About that last statement we don't know anything yet, because geeks are commonly too young a group to produce this kind of statistical material, but my guess is this will become clear in a couple of decades.
Since Open Source developers clearly seem to enjoy what they're doing, they don't usually think that money is all-important. They usually have a healthy attitude towards balancing earning and spending money. Expect your presents in hardware, of course. But hey, if he buys your laptop, all the more for you to spend on make-up, new clothes or whatever you like!
Another nice side effect of Open Source is the motivation and energy these developers have. And not only about their software: they will also be very enthusiastic about you, guaranteed! And they want to make things better -- almost sounds like Philips; what more can you want? Plus they are innovative and funny and generally not the kind of guy who wants to stay in his ivory tower.
To a certain (large) class of Open Source developers, programming is an art. They have an eye for aesthetics. They will tell you when you look too fat in your dress, but when asked to observe, they will notice you changed your hair colour.
Open Source people are also the kind of guys who usually have elaborate ideas about politics and freedom, and are generally peace-loving. Not the kind of husband who beats up his wife -- maybe because they usually don't have the physique for it to start with, but even if they did, you're safe.
You might not have thought of it, but your true Jacob might be a geek...
More information can be found at infonomics.nl. The survey was published in the Autumn of 2002 and was apparently also referenced on Slashdot.
Last evening, I received a singular lesson in Indo-American relations. And why the two countries are coming closer despite minor irritants like Pakistan and Iraq. The lesson was delivered by a young American who wished to remain unnamed. The setting was one of those boring parties that features businessmen, journalists and members of the diplomatic corps of various countries. The immediate provocation that sparked off this most interesting conversation was my stray remark -- naive as I see it now -- that America, being the most powerful country in the world today, did not need anyone's help to do whatever it wished. Or, at least, that Americans needed help from other countries far less than the latter did from them.
I was gently contradicted by this young American gentleman. "Oh, but you are so wrong," he said. "Sure, we Americans do not need much help from other countries -- but we just cannot do without the help of India."
I must confess that left me completely stumped. "Why does the US need India? Is it because we are both democracies and we both want to fight terrorism?" I asked. "Of course not. There are far more important reasons than that," he remarked. "It is because of your sheer numbers. Why, even George Bush remarked to former ambassador Blackwill once that what he liked about India was that it had a 'billion people..... Isn't that something?' "
"So that is the reason -- India is a big market for the Americans, is that it?" I asked. "Oh, you are not really an attractive market. There are dozens of other markets which have better potential," he said.
By now I was getting thoroughly puzzled. So I waited for him to explain further.
"You see, we need you at the time of birth," he said. "We are facing a terrible shortage of nurses back home -- we need almost 500,000 of them in a hurry and fewer and fewer people are joining the nursing courses in the US. It is a terrible job -- low wages, bad working hours, not many prospects of going up in life. And, therefore, we need nurses from India.
"And then we need teachers," he continued. "Now you Indians produce vast hordes of graduates and post-graduates who have no jobs. They jump at the thought of becoming teachers in our public schools. Now not too many people in America want to become teachers in these schools, what with the shootings and the bad behaviour of our kids. So we are looking at Indians to fill up the positions in at least the worst public schools in our country."
"Then there is research on all the subjects -- genetics, software, telecommunication, pharmaceuticals -- where we Americans want to maintain our lead in the world. But because of our primary education system, we just aren't producing enough brilliant people. But your IITs churn out brains by the thousands. And we need them to do our research for us."
By now he was warming up to the subject. And I was more than happy to listen to his logic. "We will also shortly need you for our armed forces. As Iraq and Afghanistan showed us, even with our technological superiority in warfare, we still end up losing some American lives in battle," he said.
"But India doesn't want to fight American wars. I thought our prime minister had made that amply clear when the US asked for troops for Iraq," I argued.
"Oh, India can stay out if it wishes. But suppose we give out visas and green cards to people who want to join the American armed forces, I am sure we will get millions of Indians applying overnight. In fact, I have already proposed the idea to some higher ups," he said somewhat smugly.
"And then we need you people to man all those irate calls that we get from people who have bought American products. Why do you think so many of these call centre jobs are being shifted to India these days?" he said.
"I thought that was because of the cost differential," I pointed out.
"That is only a small part. The real reason was that we were simply too sick of fielding all those calls from cranky customers who think that just because they have bought a product, it gives them the right to also expect service," he said.
I must confess I had never thought of that aspect.
"And then we will need more doctors and nurses from India to look after our ageing baby boomers. We will need Indians to run our retirement homes. And we will need them to take care of hazardous things like flights to space," he said.
"So what will the Americans do?" I asked.
"Oh, we will continue to do the really important thing --- RUN THE WORLD."
The following talk is all about Debian, on which I was made to speak at the ILUGD meet held on the 18th of April 2004. Most of the references in this talk have been taken from Manoj Srivastava's (Lead Debian Developer) talk.
Debian -- Philosophy, Merits and Key Features
Philosophy is the most durable differentiating criterion between the operating systems we are considering. Performance numbers change. Ease of use, reliability, availability of software -- all these characteristics change over time, and you have to go out and re-evaluate them over time.
But philosophy doesn’t change.
I must confess that philosophy and community are what led me to Debian; and I think these are still the most important criteria, and are often underrated.
Why is free software a good thing?
The popular answers seem to be:
because it is cool,
because it is zero cost,
because it gives you a geekish image.
The motivations of the authors also vary, but the coin they get paid in is often recognition, acclaim in the peer group, or experience that can be traded on in the workplace.
But all this misses the critical reason why Free Software was designed. I'd like to draw an analogy to the manner in which academic research is conducted. If researchers were doomed to reinvent the wheel, the handle, the brakes and the axle every time, then progress on everything beyond that -- other innovations, maybe the motorbike -- would be stunted in the research community. People start in research by doing literature searches, looking for interesting investigations, and perhaps correlating unrelated papers, building on the ideas and techniques of other researchers in the field. The secrecy shrouding research in most labs exists only till the moment of publication -- and then people share their techniques, and ideas, and results -- indeed, reproducibility is a major criterion of success. Contrast this with proprietary software, where mostly everything begins again -- from scratch.
People could soar and grow if only we could freely share and build upon the ideas and labours of others. This would lower the time, effort, and cost of innovation, allow for best practices and design patterns to develop and mature, and reduce the grunt programming that raises the barrier to developing solutions in house.
==== We just have to ensure that the incentive for achievement still exists (and it need not be purely a profit motive). ====
This belief leads us to choose the GPL and the Free Software Foundation view of things, as opposed to the BSD licences (which are also free software licences), and eventually leads to choosing Debian. In my personal opinion, the BSD licence has been more about personal pride in writing free software, with no care as to where the software went.
Debian is an exercise in community barn building; together, we can achieve far more than we could on our own. The Debian social contract is an important factor in my choice of Debian, with its blend of commitment to free software.
What leads an average computer user to a good OS?
Ease of use
Availability of software (Software Packages)
Utility and Usability
Utility, of course, depends on what our goals/requirements are.
There is more to an operating system than a kernel with a hodge-podge of software thrown on top -- systems integration is a topic usually given short shrift when discussing the merits of a system. But a well-integrated system -- where each piece dovetails with and accommodates other parts of the system -- has greatly increased utility over the alternative. Debian, in my experience, and the experience of a number of other users, is the best-integrated OS out there. Debian packages trace their relationships to each other not merely through a flat dependency/conflicts mechanism, but through a richer set of nuanced relationships.
Apart from this, packages are categorized according to priority (essential through extra) and their function. This richness of the relationships, of which the packaging system is aware and pays attention to, indicates the level at which packages fit in with each other. Debian is developed by about 1000 volunteers (most of whom are sysadmins). That means that every developer is free to maintain the programs he is interested in or needs for his special tasks in real life. That's why Debian is able to cover different fields of specialization -- its developers just want to solve their own special problems. This broad focus is different from commercial distributions, which just try to cover mainstream tasks.
It is said that Debian machines at work:
Take less hand holding,
Are easier to update,
And just plain don't break as often as the Red Hat and Mandrake boxes.
One of the reasons for selecting Debian over other distributions is the sheer size of the project, which strongly suggests that Debian won't suddenly disappear, leaving one without any support. Debian can't go bankrupt. Its social contract doesn't allow the project to abruptly decide not to support non-enterprise versions of the distribution. I do not want my OS to be held hostage to anyone's business plans!
You can fine-tune the degree of risk you want to take, since Debian has three separate releases:
Stable -- Woody,
Testing -- Sarge, and
Unstable -- Sid.
On some of the machines people run `stable'. Some of the other systems (individual workstations) run various combinations of testing/unstable. (Note that there are no security updates for testing.) What's great is the ability to make finely graded decisions for different machines serving different functions. But even the more adventurous choices are solid enough that they virtually never break. And `stable' just never breaks ;-).
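For illustration, here is a minimal /etc/apt/sources.list sketch for tracking a single release (the mirror URL and components are only assumptions; substitute your own):

    # /etc/apt/sources.list -- pick the release you want to track
    deb http://ftp.debian.org/debian stable main contrib non-free
    deb-src http://ftp.debian.org/debian stable main
    # point at "testing" or "unstable" instead for the other releases;
    # mixing releases on one box is usually done with apt pinning in /etc/apt/preferences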
Large number of Supported Architectures.
Supported architectures are:
Intel x86/ IA-32 (i386)
Motorola 68K (m68k)
Sun SPARC (sparc)
Motorola/IBM Power PC (powerpc)
MIPS CPUs (mips and mipsel)
HP PA-RISC (hppa)
Debian GNU/Hurd (i386)
Debian GNU/NetBSD (netbsd-i386 and netbsd-alpha)
Debian GNU/FreeBSD (freebsd-i386)
Debian provides a great deal of feedback upstream. For example, the XFree86 project does not itself maintain or debug X on all the architectures Debian supports -- it relies on Debian for that. This attention to detail is hard for any other Linux distribution to match.
Is it just apt-get?
People often say how they came to Debian because of apt-get, or that apt is the killer app for Debian. But apt-get is not what makes the experience so great: apt-get is a feature readily reproduced (and, in my opinion, never equalled) by other distributions -- call it urpmi, apt4rpm, yum, or what have you. The differentiating factor is Debian policy, and the stringent package-format QA process (look at things like apt-listchanges, apt-listbugs, dpkg-builddeps, pbuilder, pbuilder-uml -- none of which could be implemented so readily without a policy; imagine apt-listchanges without a robust changelog format). It is really, really easy to install software on a Debian box.
So the killer app is really Debian policy, the security team, the formal bug priority mechanisms, and the policy about bugs (namely: any binary without a man page is an automatic bug report. Any interaction with the user not using debconf is a bug).
A small reading from the Wiki Page of “Why Debian Rocks”:
This is the crux, the narthex, the throbbing heart of Debian and what makes it so utterly superior to all other operating systems. Policy is defined. It is clear. It is enforced through the tools you use every day. When you issue apt-get install foo, you're not just installing software. You're enforcing policy - and that policy's objective is to give you the best possible system. What Policy defines are the bounds of Debian, not your own actions on the system. Policy states what parts of the system the package management system can change, and what it can't, how to handle configuration files, etc. By limiting the scope of the distribution in this way, it's possible for the system administrator to make modifications outside the area without fear that Debian packages will affect these changes. In essence, Policy introduces a new class of bugs, policy bugs. Policy bugs are release-critical -- a package which violates policy will not be included in the official stable Debian release.
The evaluation process each package has to undergo in the unstable distribution before it makes it into testing adds to the quality of the finished product. Once a package has not shown any important problems for a certain period (14 days), it goes into the testing distribution. This distribution is the release candidate for the future stable distribution, which is released only when all release-critical bugs are resolved. This careful testing process is the reason why Debian has a longer release cycle than other distributions. However, in terms of stability this is an advantage. (Note: RH Enterprise Linux is apparently shooting for 12-24 month release cycles, closer to what Debian has historically had.)
The fact that Debian supports as many architectures as it does also feeds into the quality of packages: porting software often uncovers flaws in the underlying code. Add to that the fact that all software in Debian goes through 10 or so automatic build daemons and needs to be bug-free when building in these different environments; this requires that the build and install scripts be very robust, and requires very strict tracking of build-time dependencies. Add source archive mirrors and version tracking, and you have a fairly robust system (snapshot.debian.net provides for easy rollbacks). The Debian bug tracking system is a key to the quality of the distribution. Since releases are linked to the number of release-critical bugs in the system, it ensures that the quality of the release is better than any proprietary UNIX. The Release Manager is fairly ruthless about throwing out any non-essential package with RC bugs if they do not get fixed -- or delaying the release if it is a critical package with the bug. Compared to commercial Linux distributions, Debian has a far higher developer-to-package ratio. Added to the lack of business-cycle-driven deadlines, Debian tends to do things right, rather than do things to get a new version out in time for Christmas.
Features Set and Selection of Packages
Debian has over 10,000 packages now (13,000+ in sid). The chances are that anything you need is already packaged and integrated into the system, with a person dedicated to keeping it (and a small number of other packages) up to date, integrated, and bug free.
Debian has a huge internationalization effort, translating not only the documentation but also the configuration and install scripts (all debconf interaction can be fully internationalized). It helps to have a massively geographically distributed community -- there are native speakers of tonnes of languages. The internationalization effort in Debian matches that of Gnome and KDE.
Other notables, to pay a little attention to, are:
The Debian documentation project,
The package tracking system.
Some other things which will keep me using Debian until they're supported by something else:
debconf and the ability to prepopulate the database
make-kpkg with all the install-time prompts turned off
The BSD kernels, from all accounts, seem to be stabler, and of better quality, than the Linux kernels. On the flip side, Linux kernels are more feature-rich, their quality has improved significantly, they seem to perform much better, and they have better hardware support than the BSD kernels do. Indeed, I've heard comments that when it comes to driver support, the BSDs are where Linux was 5 years ago. Personally, the supposed added bugginess of the Linux kernels has not exceeded my threshold of acceptability. And, overall, I don't think that a Debian box feels any less robust and stable than, say, a FreeBSD box. Of course, the recent spate of holes in Linux kernels is beginning to strain that. (However, we should keep in mind that having more features is a contributing factor: the two latest holes were in the mremap(2) call, which is not available on any of the *BSDs.)
Upgrades have been said to be the killer advantage for Debian. More than for most other OSes, the network is the distribution and upgrade mechanism for Debian. Policy, the thought that has gone into the maintainer scripts and the ways in which they can be called, and the full topological sorting over the dependency web done by apt and friends, all work together to ensure that upgrades in place work smoothly. Reinstalls are not unheard of in a recommended BSD upgrade path (since 2.8 or 2.9, OpenBSD has said at least twice to i386 users "upgrade not supported / not recommended, do a fresh install").
This ease of upgrades also plays into security of the system; security upgrades are far more convenient on Debian than they are on other systems, thanks to the Security team. For us mere mortals not on vendor-sec, having security.debian.org in our sources list ensures that our boxes get updated conveniently, and quickly, after any exploit is made public -- since the security team was already working on a fix before the details went public. This means that systems get updated in minutes, whereas the recommended way to do an upgrade on a BSD OS involves recompiling the entire system (at least, the "world").
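For example, with the security archive in your sources list, keeping a box current is a one-liner (a sketch; "woody" here assumes the then-current stable release):

    # in /etc/apt/sources.list:
    #   deb http://security.debian.org/ woody/updates main contrib non-free
    apt-get update && apt-get upgrade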
Debian attempts to ensure smooth upgrades even when skipping a major release -- which is not something that I have seen supported elsewhere. I keep coming back to the quality of packaging. Even downgrades are possible: experience and talks show that Debian can be downgraded to a previous release too, but that isn't recommended or encouraged anyway.
Administering Debian is the primary reason most people stay with it. I know no other distribution where you can type in apt-get install sendmail, and walk away with a fully functional mail server, complete with SASL and TLS, fully configured, complete with certificates. All administration can be done over SSH given only dialup speeds.
The Debian guarantee that user changes to configuration files shall be preserved, and that all configuration files shall live in /etc (as opposed to being all over the file system), makes for easier backups. Debian is compliant with the FHS, and LSB compliance is a release goal. The distributed nature of Debian development and distribution makes it really easy to set up a separate repository of custom packages that can then be distributed in house; and the policy and build mechanisms ensure that third parties can build the system just as easily, in a reproducible fashion.
Portability and Hardware Support.
Linux tends to support more of the esoteric hardware than BSD does. Whether that is a problem depends on your needs. Support for high-quality hardware is mostly the same. IBM's assurance of Linux support on all their hardware, and that of HP, is also an advantage for Linux. The multiple journaling file systems that have come into the Linux kernel recently are also a vital add-on. For the desktop, the killer factor is drivers, and Linux leaves all the other x86 Unixes behind by a mile. When it comes to portability, NetBSD is supposed to be the byword. I googled to find out what is supported by NetBSD and by Debian: I found that Debian supports IBM S/390 and IA-64, while NetBSD has support for sun2 (m68010), PC532 (whatever that is), and VAX. Note that what NetBSD calls architectures are often labelled sub-architectures by Debian, and thus do not count in the 11-supported-architectures count.
There is a lot said about the ports mechanism of BSD, and the portage system of Gentoo. I have also heard about how people have problems actually getting things to compile in the ports system -- apart from the fact that compiling everything rapidly gets old.
It is not as if you can't do a ports-like auto-build of Debian -- there are auto-builders on 11 architectures that do that, continuously, every single day -- the question is why would one want to? I have yet to see a single, replicable test demonstrating any palpable performance improvement from local, tailored, optimized compilations -- and certainly none that justifies, in my eyes, the time spent tweaking and building the software all over.
Someone said that when they were younger and felt like playing a prank they would adjust some meaningless parameters on someone's computer and tell them "this will make it run about 5% faster, but you probably won't notice it". With such a challenge they usually responded by becoming totally convinced that their machines had been improved considerably and that they could feel the 5% difference!
Conventional wisdom seems to indicate overall system performance increases of less than 1%. Specific programs can benefit greatly, though, and you can always tweak a critical app for your environment on Debian. Whatever time is saved by running an optimized system is more than compensated for by the time spent building the system, and building upgrades of the system (I've heard of people doing their daily update in the background while doing other things in the foreground).
Not to mention how integration suffers by not having a central location where interoperability of the pieces can ever be tested well, since every system would differ wildly from the reference.
A source build system is also far more problematic when it comes to major upgrades -- there is anecdotal evidence of it not being as safe and sane as the Debian upgrade mechanisms.
Anyway, if we do want to build packages from source on Debian, we can use:
apt-get source -b packagename,
apt-src, or any of a number of tools.
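For instance (a sketch; packagename is just a placeholder):

    # fetch the source of a package and build it in one step:
    apt-get source -b packagename
    # or fetch it and build by hand with the standard tools:
    apt-get source packagename
    cd packagename-*/
    dpkg-buildpackage -rfakeroot -us -uc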
The real point here is that Gentoo is a distro for hobbyists and hard-core Linux users who can spare the time to build their apps. I know Gentoo also provides pre-compiled binaries -- but does that not defeat their supposed advantage? For an enterprise environment where downtime does cost money, this is simply inadmissible, and Debian provides the best solution. Those of you who administer more than a handful of machines can really appreciate how convenient it is to be able to issue apt-get update && apt-get upgrade at once, instead of having to go downloading, configuring, compiling and installing software machine by machine, without any sort of automated help (I am not completely doing justice to emerge/portage here, but the point is clear, I hope). I can't emphasize this enough: for "serious"/production usage, binary distros are the best and only viable solution; amongst them, Debian (not only because of APT but also because of all the hard work done by Debian Developers to ensure correctness of the packaging) is the best. [I have tried SuSE, RedHat and Mandrake, and I wouldn't prefer going back.]
Security And Reliability
There is always a trade off between security and convenience -- the ultimately secure computer is one that is never turned on. Secure, but not very useful. You have to decide where your comfort zone lies.
What does one think of when one says security and a Unix-like OS? OpenBSD, with some justification. It is audited and has a small size, small system requirements AND a pure text-based install. If you stick to the core install, you get an audited system, with no services turned on by default and an assurance that there are no holes in the default install that can lead to a remote root compromise. However, you tend to end up with old software, and the default install really does very little. Most people agree that the secure and audited portion of OpenBSD does not provide all the software they require. Also, OpenBSD's performance numbers are, umm, poor compared to SELinux on a 2.6.3 kernel.
OpenBSD's secure reputation is justified -- but only when you know the project and are familiar with what it really covers. OpenBSD may be a great firewall, maybe even a mail or static Web server. As long as you keep out of the ports tree, you do have an audited, security-conscious system. The OpenBSD userland ports break more often than stable Debian -- but, in OpenBSD, ports are officially not part of the system, and should a security problem appear in one of them, you are on your own.
The Debian GNU/Linux distribution has a strong focus on security and stability. We have a Security team, automated build systems to help the security team quickly build fixed versions across all the supported architectures, and policy geared towards those goals. Debian handles binary package distribution much better. One can have one's own apt-able archive and feed all production servers from it, using Debian's native apt mechanisms. Even without SELinux, I find the rock-solid stability of Debian stable, with the peace of mind that comes from back-ported security fixes provided by the Security team, very persuasive. It is easy for an untrained recipient to keep up to date with security, and it reduces the likelihood of compromise. This is very important in a commercial environment with a large number of computers, where it is important that the software NOT be upgraded every few months.
Latest Development In Debian
Most of the complaints that I've heard about Debian are from newbies complaining about its installer. The hurdle that most people face is installing Debian. The blue-screened, console-based installer seems ultra-technical and ugly to them. The installer could be an issue to some extent, but I think specifically for newbies; experienced users often find the installer quite easy and simple to use. It's just the trend of fancy GUI-based installers that has led people to regard Debian as an ultra-technical GNU/Linux distribution. The next-generation Debian Installer, scheduled to ship with Debian Sarge, promises to solve many of these problems for newbies. Also, the anaconda installer from Red Hat has been ported to Debian and can be found at Progeny.
There is no other OS or distribution that I know of which has just this mix of properties (ease of maintenance, affordability, stability, size, customizability, strong support). For the most part, I do not want to tinker with and debug my workstation; I want to get my job done, easily, safely, and with minimal concern about the infrastructure I use. Debian helps me accomplish that. And that's still the primary reason I use it today, from a technical standpoint: software installation and upgrade. The packages are top-notch; they, as a rule, install and upgrade perfectly. Software maintenance is still a really large part of any sysadmin's job, and with Debian it's simply trivial. It's a non-issue. Don't even bring it up when talking about any problems with Debian; it's not worth the effort.