Monday, June 25, 2007

 

Tuning the Linux kernel for better network Throughput

By Vincent Danen, Special to ZDNet Asia
18 June 2007

The Linux kernel, and the distributions that package it, typically ship with very conservative defaults for a number of network settings. These settings can be tuned via the /proc filesystem or with the sysctl program. The latter is often more convenient, as it reads /etc/sysctl.conf, which allows you to keep settings across reboots.

The following is a snippet from /etc/sysctl.conf that may improve network performance:

net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

The above isn't meant to replace what may already exist in /etc/sysctl.conf, but rather to supplement it. The first setting enables TCP window scaling, which allows clients to download data at a higher rate by using extra bits in TCP packets to advertise a larger window size.

The second setting enables TCP SYN cookies, which are often enabled by default and are extremely effective in mitigating SYN floods, a type of attack that can drain the server of the resources used to process incoming connections.

The last four options increase the TCP send and receive buffers; the three values given for tcp_rmem and tcp_wmem are the minimum, default, and maximum buffer sizes in bytes. Larger buffers allow an application to move its data out faster so it can serve other requests, and they also improve the client's ability to send data to the server when the server gets busy.

By adding these commands to the /etc/sysctl.conf file, you ensure they take effect on every reboot. To enable them immediately without a reboot, use:

# sysctl -p /etc/sysctl.conf

To see all of the currently configured sysctl options, use:

# sysctl -a

This will list all of the configuration keys and their current values. The sysctl.conf file allows you to configure and save new defaults; the output of sysctl -a shows the values that are currently in effect, whether they come from the kernel's built-in defaults or from sysctl.conf. To see the value of one particular item, use:

# sysctl net.ipv4.tcp_window_scaling

Likewise, to set the value of one item on the fly without adding it to sysctl.conf (understanding that the change won't be retained across reboots), use:

# sysctl -w net.ipv4.tcp_window_scaling=1

This can be useful for testing the effectiveness of certain settings before committing them as defaults.
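Since these settings are also exposed through the /proc filesystem mentioned earlier, the same sort of temporary, non-persistent change can be made by writing to the corresponding file directly; for example:

# echo 1 > /proc/sys/net/ipv4/tcp_window_scaling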



 

Tuning the Network File System for better performance

By Vincent Danen, TechRepublic
The Network File System (NFS) is still very popular on Linux systems, but its performance can be improved by tuning the relatively conservative defaults that most Linux distributions ship with. This tuning can be done on both NFS servers and clients.

On the server side, you must ensure that there are enough NFS kernel threads to handle the number of connections by the clients. You can determine whether or not the default is sufficient by looking at RPC statistics using nfsstat on the NFS client:

# nfsstat -rc
Client rpc stats:
calls        retrans      authrefrsh
3409166      330          0

Here you can see that the retrans value is quite high, meaning that retransmissions have often been necessary since the last reboot. This is a clear indication that the number of available NFS kernel threads on the server is insufficient to handle the requests from this client. The default number of threads rpc.nfsd starts is typically eight.

To tell rpc.nfsd to use more kernel threads, the number of threads must be passed as an argument to it. Most distributions have a file such as /etc/sysconfig/nfs to configure this; on a Mandriva Linux system, the RPCNFSDCOUNT item in /etc/sysconfig/nfs determines the number of kernel threads passed to rpc.nfsd. Increase this number to perhaps 16 on a moderately busy server, or up to 32 or 64 on a more heavily used system. Then re-evaluate with nfsstat to determine whether the number of kernel threads is sufficient: if retrans stays at 0, it is enough; if the client still needs to retransmit, increase the number of threads further.
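As a rough sketch (the exact file location and variable name vary by distribution), the change on a system that uses /etc/sysconfig/nfs would be a single line such as the one below; restart the NFS server service afterwards so rpc.nfsd is started with the new thread count:

RPCNFSDCOUNT=16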

On the client side of things, remote NFS mounts should be mounted with the following options:

rsize=32768,wsize=32768,intr,noatime

By default, most clients mount remote NFS file systems with an 8-KB read/write block size; the options above increase that to a 32-KB read/write block size. They also ensure that NFS operations can be interrupted if there is a hang, and that the atime won't be constantly updated on files accessed on remote NFS file systems.
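For example, an /etc/fstab entry using these options might look like the following (the server name and paths are placeholders):

fileserver:/export/home   /home   nfs   rsize=32768,wsize=32768,intr,noatime   0 0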

If NFS file systems are mounted via /etc/fstab, make the changes there; otherwise, you will need to make them to any configuration files belonging to your chosen automounter. In the case of amd, the /etc/amd.net file would look like:

/defaults fs:=${autodir}/${rhost}/root/${rfs};opts:=nosuid,nodev,rsize=32768,wsize=32768,intr,noatime
* rhost:=${key};type:=host;rfs:=/

By tweaking the defaults of NFS servers and clients, you can make using NFS faster and more responsive, particularly if you make heavy use of NFS file systems.



Tuesday, October 17, 2006

 

XP and Linux dual boot using NT Loader and GRUB

1. My system
Two hard disks:
hda for Windows XP
hdb for Linux (CentOS)

2. Sample GRUB configuration
Generate GRUB's configuration file, /boot/grub/grub.conf. The contents of menu.conf are as follows:
######### beginning of menu.conf ####################
default=1
timeout=4
#splashimage=(hd0,3)/boot/grub/splash.xpm.gz # for a prettier boot screen, remove the leading "#"
title Red Hat Linux (2.4.12)
root (hd0,3)
kernel /boot/vmlinuz-2.4.12 ro root=/dev/hda4
title Red Hat Linux (2.4.14)
root (hd0,3)
kernel /boot/vmlinuz-2.4.14 ro root=/dev/hda4
######### end of menu.conf ####################

3. Install GRUB to the boot sector of the Linux partition
Install GRUB's stage1 to the boot sector of /dev/hdb1, i.e. (hd1,0). The procedure is as follows:

/sbin/grub (run grub)
grub> install (hd1,0)/grub/stage1 d (hd1,0) (hd1,0)/grub/stage2 p
(hd1,0)/boot/grub/menu.conf

(Note: "grub>" above is the grub prompt, and everything after it is typed on a single line.)

4. Capture GRUB's boot sector
Proceed as follows:
dd if=/dev/hdb1 of=/grub.lnx bs=512 count=1

This captures GRUB's boot loader image into /grub.lnx; all that remains is to have the NT Loader load it.

5. Copy the grub.lnx file obtained above to the root of the Windows C: drive
You can copy grub.lnx onto a floppy first, boot into Windows, and copy it to C:\; if circumstances allow, you can also copy it to C: directly from Linux. My C: drive (device /dev/hda1) is FAT32, so I can copy it over straight from Linux, like so:

mount -t vfat /dev/hda1 /mnt/c
cp /grub.lnx /mnt/c
umount /mnt/c

6. Modify the NT Loader's boot.ini
Add the following line to it: C:\grub.lnx="Redhat Linux - GRUB"
After the addition, boot.ini looks like this:

[boot loader]
timeout=5
default=C:\boot.lnx
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\PNT40W="Windows xp"
multi(0)disk(0)rdisk(0)partition(1)\PNT40W="Windows xp [VGA mode]" /basevideo /sos
C:\grub.lnx="Redhat Linux - GRUB"

Monday, April 24, 2006

 

Learn these advanced vim features

By Vincent Danen, TechRepublic
10 Apr 2006


One extremely convenient feature of vim is its ability to fold text. With this feature, you can make parts of a text file disappear, without actually removing it.

Think of it as getting unimportant stuff out of the way--a real boon for developers who want to hide functions that aren't relevant to whatever is being worked on at the moment.

In order to fold a section of text, place the cursor on the line to start the fold and press V when you're in command mode (not in insert mode). Move the cursor to highlight the selection of text you wish to fold. Once everything is highlighted, type zf in command mode, and the text will disappear and be replaced by text like the following:
+-- 16 lines: function print_help()----------------------------------

To expand the fold, type zo or use the right arrow key with the cursor on the folded line. The summary line will be replaced with the hidden text. To re-close the text, type zc anywhere within the fold; vim remembers the settings for the fold and will fold it appropriately.

Another neat feature here is that if you make changes to the fold and add or remove text, vim still remembers the start and end markers and will fold it accordingly. For example, if the original fold was 16 lines and you added four lines, vim will fold all 20 lines, provided the added lines were within the previously defined fold.

To get more help on vim's folding features, type :help folding.

Another quick tip on vim is its abbreviation feature. With it, you can create abbreviations for text. By typing this in command mode:
:abbr ap Apache Web Server

Anytime you type ap[space] in your document, it will automatically expand to "Apache Web Server". This is also a useful way of automatically correcting frequently misspelled words; for example, :abbr hte the, and so forth.

If you need to sort a block of text in vim, you can easily do so by highlighting the text to sort (pressing [V] in command mode and moving the cursor to select the text) and, when all the text to sort is highlighted, typing !sort. You can do all kinds of nifty things here because vim is using the external program "sort" as a filter, so to do a reverse sort you would use !sort -r instead. You can use any text-filtering or transformation program here that you like.

vim has more features than you can shake a stick at; these are just a few samplings of how versatile vim really is.

 

Extend MediaWiki with custom extensions

By Vincent Danen, Tech Republic
MediaWiki is a great PHP-based wiki implementation that is used to power many sites, including Wikipedia. It's quite flexible, very secure, and extremely easy to use. It's also remarkably simple to write plugins for.
Assume for a moment you want to include the contents of an external file in your wiki. The file changes often and the wiki must reflect the current state of the underlying file. You could write a quick extension to handle this. Extensions live in the extensions/ subdirectory and while there are many extensions you can download, if you have a basic understanding of PHP, you can write your own custom extensions that will look seamless on the wiki.

The extension in Listing A will display the contents of the file passed to it via the wiki page (more on that in a moment); the file must be readable by the Web server user.

Keep in mind that an extension like this could be dangerous if you don't filter the input. In this code, we explicitly test whether $input is one of two files: /var/lib/foo or /tmp/status. If it is neither, the $file variable is not set and remains NULL; if the passed filename is one of the two allowed files, its contents are retrieved and returned to MediaWiki to display.
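Listing A itself isn't reproduced in this post, but a minimal sketch of what such an extension might look like is below. The tag name (file), the callback name (renderFile), and the two allowed paths come from the description here; the setup function name, the <pre> wrapping, and the HTML escaping are assumptions added for illustration:

<?php
# plugin_file.php -- hypothetical sketch of the extension described above,
# not the original Listing A
$wgExtensionFunctions[] = "wfSetupFilePlugin";

function wfSetupFilePlugin() {
    global $wgParser;
    # Associate the <file> tag with the renderFile() callback
    $wgParser->setHook("file", "renderFile");
}

function renderFile($input) {
    $file = NULL;

    # Only two known files may be displayed; any other input is ignored
    if ($input == "/var/lib/foo" || $input == "/tmp/status") {
        $file = $input;
    }

    if ($file == NULL) {
        return "";
    }

    # The file must be readable by the Web server user
    $output = file_get_contents($file);
    return "<pre>" . htmlspecialchars($output) . "</pre>";
}
?>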

Add the following to LocalSettings.php to include the extension file (assuming it's called plugin_file.php):

include("./extensions/plugin_file.php");
Finally, to use this plugin in MediaWiki itself, create or edit a page in the wiki with the following code:

<file>/tmp/status</file>
We defined <file> as the tag to use in plugin_file.php via the $wgParser->setHook call; we associated the tag name file with the renderFile function, which we then defined.

As you can see, this is extremely simple, and being able to add your own custom PHP code to a site gives you limitless possibilities of extending your wiki.

Friday, April 21, 2006

 

Why technical support isn't working

By Jonathan Yarden, TechRepublic
22 Mar 2006


I've recently read several articles that were disparaging toward technical support departments. While some of the issues are indeed valid, I must point out that it's easy to blame someone else when computers don't work the way they should.

Technical support is failing on a number of levels, but it's not because of a lack of effort or ability. This field has changed drastically since the emergence of the Internet, and some problems simply aren't fixable.

Internet service providers are currently bearing the brunt of technical support services, and the profit margins for most ISPs limit the staffing of call centers. The rash of recent problems is no comfort to IT departments that must support an entire enterprise either.

No matter how you look at it, supporting computer systems when the Internet is involved has never been an easy job, and it's becoming increasingly difficult--especially when you're dealing with a daily barrage of viruses, worms, Trojans, and all sorts of spyware.

Technical support relies heavily on users' abilities to perform tasks, and we're all more than familiar with the difficulty involved with assisting inexperienced computer users. Most widespread worms and viruses take hold and spread due to poorly maintained systems, commonly home systems found on broadband networks.

Since I can't help these users directly, I must rely on their ISPs to help me fix the problem because it affects my network. I see evidence of worms and viruses coming from other networks all the time, but I'm powerless to fix the problem.

Technical support is failing on a much greater level than most people know. It surprises many people when they find out there's almost no coordination among Internet service providers.

ISPs are on the front line of the Internet, but there's no central method for support pros to communicate and contact high-level technical support in the event of a problem. I see tens of thousands of port scans daily--mostly due to worms--coming from big companies, small companies, universities, and cable and DSL networks. Sometimes I contact the people who manage these networks and tell them there's a problem, but most of the time I can't.

So I move upstream and try their ISP. Still, I can't help fix the problem because many ISPs' automated problem report systems reject my e-mail since I'm not one of their customers. Or worse, when I call to report a problem, I encounter arrogance and attitude.

My main complaint about technical support is that IT professionals are simply not working together as a team in a worldwide manner. Regardless of which ISP or corporation you work for, if you're involved in high-level technical support or you're a person of authority with a major ISP, I'd like to have your contact information. Who knows--maybe if enough of us agree to work together as a team and respond to each other quickly when a problem emerges, we can make a dent in the wasted bandwidth on the Internet due to wide-scale worm and virus activity.

We are the ones that others depend on to fix their problems. So how about working together and treating the entire Internet as our network for a change? We could accomplish more by helping each other instead of pointing the finger at someone else.

 

Mobile browsing becoming mainstream

By Candace Lombardi, CNET News.com
Wednesday, April 19 2006 08:24 AM

A global increase in cell phone ownership and a rise in the use of wireless services by people over 35 may lead cell phones to dominate Web browsing, a new study says.

Ipsos Insight's 2005 "The Face of the Web" study shows significant increases in: ownership of mobile phones, mobile surfing by mainstream users, and adoption of wireless mobile technology by adults aged 35 and older.

Ninety percent of households in Japan, South Korea and urban China own cell phones, as do 80 percent of households in Western Europe, 60 percent in Canada and three out of four in the U.S.

In 2005, 28 percent of those mobile phone owners used their phone to browse the Internet, up from 25 percent the year before. More significantly, the increase is driven by adults aged 35 and older joining younger users in this habit.

"This older age group is really starting to explore more on their cell phone and getting comfortable. Whether it's text messaging, e-mailing or Internet browsing, our research has found that they are using their cell phones for more than just voice calling," Adam Wright, a senior research manager at Ipsos Insight, told CNET News.com.

Twenty-seven percent of adults aged 35 to 54 who live in households with cell phones claim to have browsed the Internet on their phone, up from only 21 percent in 2004, according to Wright. Twelve percent of those 55 and older also engage in mobile browsing.

These statistics have significant implications for "m-commerce" (the mobile equivalent of e-commerce) in that older demographics traditionally have more spending power as consumers. These groups also showed the largest growth.

Cell phone owners aged 18 to 34, while still the largest adopters of mobile Web browsing, held steady at 36 percent.

By country, France and the U.S. showed the most growth in browsing from a wireless device, with Japan close behind. Four out of 10 Japanese cell phone owners browse from their phones, double the number in 2003.

The study also found a correlation between comfort with cell phone use and comfort with the Internet. In 10 out of 12 global markets studied, 90 percent of people who had accessed the Internet in the last 30 days owned cell phones.

 

Help fellow developers by writing useful documentation

By Tony Patton , TechRepublic
12 Apr 2006


Every developer knows what it's like to have to make changes to an existing application. It is the true test of a developer's skills to reverse engineer a previous team's thinking. One thing that can really hinder this process is when there's no documentation and all the developers involved with the application are no longer around.

So, if developers are aware that even basic documentation can be helpful, why do we often overlook this task? Most developers I know admit that they just don't have enough time to write documentation. However, this mundane task can actually save you or your fellow developers time down the road.

If you need to write documentation, check out my tips on what it should include and where to find tools that will assist you in this process.

Describe an application's core elements
It can be a bit overwhelming to start composing a document that outlines the technical details of an application. One way to make the process less daunting is by first looking at the four core elements that are in every application: data, business rules, user interface, and security. Here are more details about each element:

* Data: The backend data utilized by a system can be as simple as a few tables or as complex as hundreds of tables along with stored procedures, views, and so forth. An up-to-date data model/diagram can save hours of time when you're faced with a new system.
* Business rules: Current approaches to system design place business rules in their own objects, separate from the data and the user interface, but in practice this often isn't the case. A simple document outlining a system's business rules can greatly enhance the understanding of what an application actually does.
* User interface: I think the user interface is one of the easier application features to understand, primarily because you get a good feel for it by simply loading and using the application. However, when custom controls or third-party tools are used, the learning curve is steeper. A good approach to documenting the user interface is to provide a list of the forms and other elements utilized in the system, along with a detailed description of each.
* Security: Developers often overlook this feature when documenting a system. As you browse through existing code, you'll undoubtedly notice security checks applied in various areas, but it is hard to grasp an overall understanding of an application's security model.

Enhance documentation with pictures
It's easy to enhance the four core elements with diagrams and figures. Database systems like SQL Server make it easy to churn out a data model via the Diagrams element within the SQL Server Enterprise Manager client. The Diagrams feature assumes the database was properly designed (with primary and foreign key relationships) so it can properly identify relationships. If primary and foreign keys are not appropriately defined, the relationships between tables aren't apparent, making it hard for the system to generate a helpful diagram. In that case, use a tool like Visio or your favorite word processor to create diagrams for your system as you build it.

Screen captures are a great way to document the user interface and provide an overview of a page. You should annotate screen captures to point out various functions. Tools like AnyDoc Software's CAPTUREit simplify the process of grabbing and working with screen captures.

Unified Modeling Language (UML) has evolved into the standard for describing a system with its various diagrams and symbols. UML diagrams include: architecture, sequence, and class.

* An architecture diagram describes the overall system, detailing the various layers of the system and how different users interface with them as well as business rules.
* You can use sequence diagrams to document processes.
* Class diagrams outline system objects.

While IBM Rational provides powerful (and expensive) UML tools, you can create UML diagrams using other products such as Visio.

Insert comments in your code
The most basic form of documentation is code comments. The .NET Framework follows Java's Javadoc approach, using XML-style comments in C#. A developer can then use a command-line tool to extract these comments and create basic system documentation of the code elements. If the developer properly inserts comments in the code, it's even possible to extract object relationships from the comments.

While it's ideal to insert comments as you develop, it can also be useful to insert comments when you make changes to the code, in order to let other developers know what you changed and why. These comments are particularly useful for resolving problems that occur after a code change (source control can also help with that problem).

Check out these commercial tools
Compiling your own document can be time-consuming. There are various commercial products available to automate the process of documenting your code. Here are two I recommend:

* ASP.NET Documentation Tool: Offers rudimentary code documentation by providing a simple snapshot of all code elements from the source. Similar items like forms, controls, and such are grouped together. (A similar tool is available for SQL Server.)
* PrettyCode.Print for .NET: Provides an easy way to generate hard copies of source code that are readable. The output is formatted with line numbers and so forth.

These tools simply spit out existing code and comments in a readable format. They don't (and cannot) insert any comments on the "how and why" questions developers usually hear when they're examining an application.

Other resources
Microsoft's patterns & practices site has a great book on building secure ASP.NET applications. It includes various security model diagrams that you may use as a starting point for documenting your own system.

You should also check at your favorite bookseller for books focusing on technical documentation and UML. There is plenty of information available for those wanting to know more.

Think of your fellow developer
I feel obligated to create documentation when I think of how much my effort will help another developer do his or her job. Even if you don't have time to create documentation as detailed as you would like, a simple document explaining what the system is supposed to do and why it was built can go a long way in helping to answer questions and resolve problems in the future.

Tony Patton began his professional career as an application developer earning Java, VB, Lotus, and XML certifications to bolster his knowledge.

 

Do simple preparation before estimating work

By Tom Mochal, TechRepublic
05 Apr 2006


The first time you know for sure the cost of a project is after the project is completed. Of course, you can't wait for the project to be completed to provide some sense for the resources that are required--that's something you need to do upfront.

Your estimating process will be much smoother and the resulting estimate more accurate if you prepare first. Before you begin the estimation, consider the following areas of preparation.

Get a clear picture of the work that is being estimated
Many (perhaps most) of the problems that you have with estimating result from not being really sure what the work entails. You should avoid estimating work that you don't understand. If you're getting ready to start a project, you should know enough so that you can estimate the work to within plus or minus 10 percent. If you can't estimate the work to this level of confidence, you should spend more time investigating and understanding the work. If the work is just too large to be able to estimate at that level of confidence, consider breaking the project into smaller pieces so that you can estimate each smaller project to within 10 percent.

Determine who should be involved in the estimating process
The project manager may or may not know enough to make the estimates on his own. It is usually a good practice to look for estimating help from team members, clients, subject matter experts, etc. For instance, there may be experts in your company that can provide more insight into the level of effort required to complete certain work. This will usually result in the estimates being far more accurate than you would get by yourself.

Determine if there are any estimating constraints
If there are estimating constraints, it's important to know them upfront. For instance, if the end date is fixed (timeboxed) by some business constraint, make sure you know this going in. Likewise, if your client expects six-sigma quality in the deliverables, you will need to estimate higher. If you have a fixed budget you can make sure that the scope of the project can be achieved within the budget. Knowing all of your constraints will help you make valid tradeoffs regarding the cost, duration and quality balance.

Utilize multiple estimating techniques if possible
There are many techniques that can be used when you get ready to do an estimate. These techniques include analogy, expert opinion, modeling, etc. Where possible, try to use two or more techniques for the estimates. If the estimates from multiple techniques are close, you'll have more confidence in your numbers. If the estimates are far apart, you should spend more time rationalizing the numbers to get them consistent. For instance, if you estimate the work at a detailed activity level, but your estimate is not consistent with an expert opinion, that should tell you to spend more time reconciling the two approaches until you achieve more consistency.

When you are asked to estimate a piece of work, resist the urge to jump right in. If you spend just a little time planning, you will find that the entire estimating process will be quicker and will result in a more accurate final estimate.

 

12 qualities of successful tech support pros

By Becky Roberts, TechRepublic
29 Mar 2006


Take a look at a typical tech support job description, and you'll find a list of fairly standard skills and responsibilities: Installs, tests, and maintains PC and network hardware and software systems; establishes and maintains a parts inventory for personal computers; produces support documentation... and so on. But being a successful tech support pro requires more than the ability to perform a diagnostic test or image a workstation. It requires the appropriate attitude and aptitude. And while skills and knowledge can be taught, attitude and aptitude cannot--they have to be selected for when the tech is hired. The following is a list of traits that make up the attitude/aptitude side of the equation.

1: Respect for all users, team members, and superiors--even when it's not reciprocated
Showing respect is an acknowledgement of another person's value and knowledge, an essential quality for tech support staff. If the users don't believe that a tech support person takes their problems seriously, they'll be less willing to communicate and they'll lose confidence in that particular IT support staff, the equipment, and the IT department as a whole. It's particularly important for the tech support professional to have sufficient composure to remain respectful even when on the receiving end of verbal abuse from an angry, stressed, and frustrated user. Although the user's problem may seem trivial from the tech's perspective, all that really counts is the user's perception of the problem, and that's what tech support needs to address.

2: Self-discipline
Being self-disciplined affects several aspects of the job, such as setting and adhering to a schedule, reliably meeting deadlines, delivering resolutions to the end users on or before the promised date/time, and sticking with a task until it's complete. Self-discipline goes hand-in-hand with respecting users; by making deadlines a priority, the support tech is demonstrating respect for the user's time. Self-disciplined support techs are more reliable, dependable, punctual, and able to handle more responsibility than their less-disciplined counterparts.

3: The ability to effectively prioritize tasks
If tech support staff are given any degree of control over scheduling their time, they must be able to prioritize their tasks. Effective prioritizing requires the staff to have detailed knowledge of each employee's role in the organization, a thorough understanding of the nature of the business, and a firm grasp of the business priorities. The rank and/or job function of the employee requesting assistance should usually figure as a major factor in prioritizing assignments. Assuming the environment is conducive to their doing so, tech support staff should do everything within their power to learn the business so they can gain the knowledge necessary for effective prioritizing.

4: Dedication and commitment to problem resolution
Tech support staff must be committed to seeing the problem through to resolution, which occurs only when the user is satisfied that the problem has been resolved--and when the solution is permanent and conforms to company policy. Consider the following example: A user reports that he can't run a recently installed application. As a step in diagnosing the cause of the problem, the tech elevates the user from restricted to full administrative access to his machine. The user can now run the application, but the work order is not complete, as company policy requires the user to have restricted access. The user is under tremendous pressure to ship an urgent order, so the tech decides to allow him to finish processing the order with administrative privilege. If the tech were not committed to complete problem resolution, it would be easy to simply close the work order and move on, violating the company security policy. Support techs must be both willing and able to follow all the steps in a procedure, even in a crisis situation, pursuing loose ends when necessary.

5: A detail-oriented working style
Paying attention to the details is essential for the successful completion of a work order. Although resolving a problem to the satisfaction of the user is necessary, it's not a sufficient condition for a work order to be considered complete. For instance, in the previous example, the tech support staff still needs to determine the cause of the problem, fix it, document it, and restore the user to his usual status. The longer the tech takes to do this, the more problems could arise. Paying attention to the details helps ensure a consistent, secure, and reliable computing environment.

6: The ability and willingness to communicate
In many organizations, the tech support staff is the most visible member of the IT department, in daily contact with the end users. In this role as representative of the IT function and as intermediary between IT and end user, effective communication is critical. The staff basically has to serve as a Babel Fish, translating between Tech-ese and Human. He or she must learn to listen to users, acknowledge the reality of their problems, translate their descriptions into technical terms, fix the problems, and explain the solutions in terms the users can understand.

7: The willingness to share knowledge with team members, superiors, and users
One specific aspect of the staff's communications skills is a willingness to share knowledge. Some employees attempt to attain job security through the possession of unique knowledge. This is misguided, as most employers are aware of the vulnerability this creates and will seek to rid themselves of such employees. The willingness to share knowledge is an essential part of being a team member. Most tech support professionals work under great pressure, with little time for research or training, so they often depend upon other team members for the advancement of their knowledge. In addition to sharing knowledge with peers, tech support staff should be willing to educate their users. Training users to make effective use of their applications and peripherals and teaching them to accurately report computer problems will help reduce user downtime and speed problem resolution.

8: A humble attitude about knowledge limitations
Tech support professionals should recognize that they'll never know everything about an issue--the key is to know where to look for information and resources and to be willing to ask for help when they need it. They must be prepared to read manuals and take correction from others. It takes a certain humility to crack open a manual, go to a colleague for a solution, or press [F1].

9: The ability to learn from experience and from informal/formal instruction
After years of school and technical training, it's all too easy for one to relax his drive to learn, assuming that now that he is employed in his chosen profession, he has all the knowledge needed to perform the job function. This may be true in certain environments, but if the tech support professional ever wants to change positions and/or companies, he or she will soon find that the knowledge is out-dated and of limited use. Rapid change is an inherent characteristic of information technology, and those who want to remain productive within the industry must actively seek out every opportunity to further their knowledge, whether through formal training by attending classes or simply by reading, participating in forums, and asking questions of co-workers.

10: The ability to think logically and creatively
Successful techs apply a consistent, logical methodology to the resolution of computer problems. This means that even when confronted with a new situation, tech support staff stand a good chance of being able to resolve the problem, or at least isolate the problem area. To back up their logical thinking, they must also be able to make creative leaps in reasoning when the application of logic fails to produce a satisfactory resolution.

11: The ability to apply knowledge to new situations
This ability goes along with being a logical, creative thinker to form the essential nature of an outstanding troubleshooter. Some people I've worked with are excellent at following prescribed procedures in familiar situations but are completely stymied when confronted with an alien situation. Being able to adapt specific knowledge to new situations is extremely important; in most environments, it would be impossible to train the tech support staff in every possible scenario. The very nature of troubleshooting requires the ability to transfer knowledge.

12: A demonstrated independent interest in technology
I'm almost hesitant to include this as an essential attribute of a support tech, as I once walked out of a job interview when I was told they were seeking a candidate who "lived, breathed, slept, walked, and talked technology." In my experience, this type of person often makes a lousy tech support staff, due to a lack of interpersonal skills. Having said this, I still maintain that if the techie has no independent interest in technology and just regards it as a job, it will be an ongoing battle to keep the tech up to date with the latest developments or to elicit any form of enthusiasm or excitement for the work. Having a techie who is engaged and excited about new technology becomes particularly important during a rollout, where he or she is uniquely positioned to influence users' attitudes toward the changes in their environment. Rollouts can cause considerable stress to users who are now required to learn a new product to perform their job function. Having a techie who is excited and engaged with the new product will encourage and reassure the users.

 

How much database do you need?

By Deb Shinder, TechRepublic
13 Apr 2006

Companies of all sizes depend on databases--organized collections of electronic information stored on one or more computers in a systematic way--to function and do business.
Almost every business function relies on databases. The personnel department needs a database of employee information. The sales department needs a database of the company's products. Even the IT department itself relies on databases such as Active Directory to store information about the users, computers and resources on the network.
A database can consist of a single table (collection of information) or multiple tables of related information that can be linked to each other (called relational databases). The tables are linked via a field that they both have in common. Database software can range from the simple Microsoft Cardfile.exe program that was built into the Windows 3.x operating systems to more sophisticated but relatively inexpensive relational database programs such as FileMaker Pro or Microsoft Access all the way up to enterprise-level server-based programs such as Microsoft SQL Server or Oracle.
Making the decision to commit to a particular database program, whether you're implementing a database for the first time or considering a switch, can be a difficult one. There is no "one size fits all" solution, but there are ways you can ensure that you don't outgrow your software too quickly as the size of your business increases.
How much database do you need?
Small businesses may not need or be able to afford the "big guys"--if you can even figure out what they cost. Licensing/pricing structures can be confusing. For example, Microsoft's SQL Server 2005 can be licensed under several different models. With processor licensing, you pay a hefty fee (from US$3,899 for Workgroup Edition to US$24,999 for Enterprise Edition) per physical or virtual processor on which the software runs. The up side is that you don't have to buy Client Access Licenses (CALs) under this model. Alternatively, you can use the Server Plus Device CAL or Server plus User CAL model that costs only US$739 to US$13,969 for a set number of CALs (5 for Workgroup and Standard Editions, 25 for Enterprise) plus US$146 to US$162 per additional device or user. In fact, it's so confusing that Microsoft has even put out a whitepaper on Understanding Database Pricing.
Oracle's pricing structure is similarly complex. There are three main editions: Enterprise, Standard and Standard One (for single CPU servers). Each has different features and prices. Per-processor licenses range from US$4,995 to US$40,000. Named user licenses range from US$149 to US$800.
If you're a small company, then, what are your options? How can you set up a database that serves your needs now, without spending a significant (albeit lesser) amount of money on something that you'll have to trash in the future as your needs grow? There are actually several ways you can go:
* If your database needs are small and confined mostly to individual users, you can use Microsoft Access, which comes with the Microsoft Office Professional and Small Business Management editions or can be purchased as a standalone product for under US$200. An advantage of this approach is that if you later implement a Microsoft SQL Server database, Access can be used as the "front end" (the interface with which users access the data on the "back end" SQL server).
* Use an open source database program such as MySQL, PostgreSQL or Borland Interbase 6.0. Some of these run on Linux/UNIX and some run on Windows.
* Use a less expensive server-based product such as FileMaker.
Commercial, custom or "roll your own"?
Generic database programs such as the ones we've been discussing allow you to design the structure of your database and create the data entry forms that will be used to enter information into it, and they include tools to sort and manipulate the data and to ask questions about it (query the database). Many include programming or macro languages that make it easy to automate functions, as well as templates, sample databases and wizards that can walk you through the process of creating your databases and forms.
You can buy commercial applications built on databases that are already created for you, for specific functions or industries. For example, you can buy accounting or finance management software instead of using database software to create a program for managing your money. If your manufacturing company needs a parts inventory program or your city government needs a program for managing police or municipal court records, many companies have already created such programs that they market commercially.
If the commercial programs aren't an exact match for your needs, there are also many companies that will create custom database packages for you after analyzing your organization's data and how you want to be able to manipulate and access it.
It can be far easier, especially if you're a small to medium-sized organization without in-house programmers, to buy one of these ready-made database programs or hire a database programmer to create one for you. However, if you choose that route, it's especially important to keep scalability in mind. If you buy a proprietary program, you may be forced to go back to the vendor--at high cost--if you need changes or upgrades made as your organization grows. And what happens if the database company goes out of business? You could be left with software that can't be upgraded at all. On the other hand, if the commercial or custom program is based on a standard database program such as Access or SQL Server, anyone who's familiar with that program will be able to make changes for you in the future.
Planning ahead for scalability
Planning ahead applies not only to deciding which database software you'll use, but also to how you structure your database. In designing a database, you should consider not just what information you want to enter into it now, but also additional information that you might need to include in the future.
Database architecture is a specialty area that requires broad knowledge and training in analyzing organizational needs, because the structure of the database will influence how easy or difficult it will be for users to enter information and to get the information they need back out of the database.
This is one area where careful planning can save you big bucks and major headaches on down the road.

 

Project Management: Avoid these common estimating traps

By Tom Mochal, TechRepublic
15 Mar 2006


Project managers are asked to provide effort, duration, and cost estimates as a primary part of our jobs. You know, then, that the estimation process is partly an art and partly a science. However, once you learn good estimating processes and techniques, you will hopefully be able to move more toward the "science" side of estimating and rely less on the "art" side. One way to move from art to science is to recognize and avoid the common errors and biases that plague the estimating process today. These errors and biases include the following:

Not taking all the work into account
This is perhaps the most common problem, especially with earlier, high-level estimates. You may just miss some major work that you didn't understand to be a part of the project, such as documentation or training. Typically, however, you underestimate the size of deliverables that need to be completed or you do not include all of the activities required to complete the deliverable.

Wishful thinking
Anyone who provides estimates of work knows that there can be pressure from your client to make the estimate as low as possible. Ultimately, the client wants to get what he needs for as little effort (and cost) as possible. In many cases, there is a tendency on the part of the estimator to get caught up in that mindset as well. The estimator ends up "wishing" that the work will fit within the client's expectations.

Committing to best-case scenario
The client wants it done as quickly as possible. Your manager wants it done as quickly as possible. You think it can be done quickly. However, you get into trouble because you think about what it would take to complete the work if everything goes right. You might even think in terms of a range of effort for the work, but then too often you end up committing to an estimate at the lower, or optimistic, end of the range.

Assuming higher quality work than you can deliver
This error occurs when you think that you can build everything right the first time. (This is similar to the prior best-case estimating scenario.) However, as the project is executing, you realize that your ability to build to a right level of quality the first time is limited, resulting in overages for rework, bug fixes, retesting, etc.

Committing based on available budget
In this case, the client has a fixed amount for the budget. The project manager thinks there is a chance the project team can get it done within available budget, so he commits based on that budget number. Estimating work based on the available budget is so obviously wrong that it's almost a cliché. However, how many times have you fallen into this trap?

Not recognizing estimating biases
Your personal biases can sneak into your estimates. Some are optimistic and some are pessimistic. Optimistic biases will result in underestimating the work and can include:

* Tending to think the work is simple (everything is simple to you).
* Thinking your team can accomplish more than they really can.
* Estimating based on what it would take you to do the work, not the average person.

Pessimistic biases will result in overestimating the work and can include:

* Overestimating the work because you had a bad experience on a similar project in the past
* Overestimating because you don't really want to do the work. You might estimate high and hope the project will be cancelled.

All of us can fall into these estimating traps and biases if we're not careful. Recognizing the problems to begin with can help you avoid the traps when you're creating estimates on your project.

Thursday, April 20, 2006

 

MySQL fills Oracle-consumed hole in database

By Martin LaMonica , CNET News.com
Employees and customers of MySQL got a jolt of concern in October, when Oracle bought Innobase, a small company supplying an important component of the MySQL open-source database.
Now MySQL has a plan to calm any lingering nerves.

The upstart database company is developing its own transactional storage engine, which can effectively be used as a replacement for the Oracle-acquired technology, executives said.

MySQL has also renewed the Innobase contract that Oracle inherited. The contract has a term of less than 10 years and calls for Oracle to continue updating Innobase's InnoDB storage engine--as it did before the acquisition--on the same terms, said Marten Mickos, MySQL's CEO.

"Oracle told us that it's business as usual--they don't want to slow us down, and they will fix bugs," Mickos told CNET News.com on Wednesday. "It's pretty good having Oracle as a subcontractor."

The MySQL database can work with different storage engines, including InnoDB. Until now, MySQL has relied on engines written by a third party and bundled with the rest of the database.

Oracle's purchase of InnoDB, which is tuned for business-oriented transaction systems, set off a wave of speculation on Oracle's intentions.

Some wondered whether the move was meant to stall MySQL market momentum or kill a popular MySQL-tied product.

RedMonk analyst Stephen O'Grady said MySQL's decision to write its own storage engine is a direct response to Oracle's purchases of Innobase and Sleepycat Software, another open-source database company whose product works with MySQL.

"Given the shot across the bow those acquisitions represented and the potential for customer disruption, it probably was in MySQL's best long-term interests to control that technology," O'Grady said.

Innobase was a small company with only five employees, and it didn't represent a large financial outlay for a company of Oracle's size.

But having control over Innobase gave Oracle valuable information on how customers use MySQL, and it offered the potential to rattle customers, O'Grady argued.

Compared to database heavyweight Oracle, MySQL is very small--it brought in a little less than US$40 million of revenue in 2005--but it is the most popular open-source database with developers, according to market research firm Evans Data.

Open-source databases, in general, are not as sophisticated as Oracle's flagship database product, but the company is seeing more competition from open-source companies such as MySQL, Ingres and EnterpriseDB.

Like other database companies, Oracle has reacted to the interest in open-source products, which proponents argue can be cheaper than established products.

In February, Oracle released a free version of its database that limits the hardware on which it can run. It also tried to buy MySQL but was rebuffed.

Encouraging plug-ins
The storage engine MySQL is working on will be available this year. The engine derives from MySQL's acquisition of Netfrastructure, which employed database luminary Jim Starkey and other engineers.

"What we didn't tell people when we bought Netfrastructure is that we were getting more than just people. We were also getting software," Mickos said.

At the company's customer conference later this month, MySQL executives will further detail its strategy for storage engines.

The company will disclose partners that are writing their own storage engines for MySQL and further detail its "plug-in" architecture for storage engines, said Zack Urlocker, MySQL vice president of marketing.

The purpose of having different storage mechanisms is to specialize. For example, a third party could create a way to index text documents very well and gain access to MySQL developers.

Separately, Mickos said the company could in the future file for an initial public offering to provide an "exit for its investors" but also said there are no imminent plans. The company raised a third, US$18.5 million round in February of this year.

"We want to remain independent," he said.

Wednesday, April 19, 2006

 

Use milestones to check on the health of your project

By Tom Mochal, TechRepublic
A milestone is a scheduling event that signifies the completion of a major deliverable or a set of related deliverables.
A milestone, by definition, has zero duration and no effort. A milestone is a marker in your schedule. You don't place milestones in your schedule based on a calendar event. In other words, you don't schedule a milestone for the first Friday of every month.

Milestones are great for managers and the sponsor because they provide an opportunity to validate the current state of the project against the overall schedule. Since each milestone signifies that some set of underlying work has been completed, your sponsor should know immediately that your project is behind schedule if a milestone date is missed. The sponsor does not need to know the individual status of all the activities in the workplan. He just needs to keep track of the status of the milestones to know if a project is on schedule or not.

In addition to signifying the status of the project against the workplan, milestones also provide a great way to take a step back and validate the overall health of the project. In particular, the following types of activities can be scheduled for (or at) each major milestone.

* Validate that work done up to this point is complete and correct.
* Make sure that the sponsor has approved any external deliverables produced up to this point.
* Check the workplan to make sure that you understand the activities required to complete the remainder of the project. You did this when the project started, but each milestone gives you a chance to re-validate that you still understand what is required to complete the project.
* Double-check the effort, duration, and cost estimates for the remaining work. Based on prior work completed to date, you may have a much better feel for whether the remaining estimates are accurate. If they aren't, you'll need to modify the workplan. If it appears that your budget or deadline will not be met, raise an issue and resolve the problems now.
* Issue a formal status update and make any other communications specified in the Communication Plan.
* Evaluate the Risk Management Plan for previously identified risks to ensure the risks are being managed successfully. You should also perform another risk assessment to identify new risks.
* Update all other project management logs and reports.
These activities should be done on a regular basis, but a milestone date is a good time to catch up, validate where you are, get clear on what's next, and get prepared to charge ahead.

Thursday, January 26, 2006

 

How to manage the document life cycle - Techguides - ZDNet Asia

