Thursday, March 31, 2005
Got enough DR bandwidth?
Monday, March 28, 2005
Use a RADIUS server for biometric authentication
Biometric technology is now cost-effective and functional enough for developers to use it in many application environments.
The Oracle database supports some of the protocols that are commonly used for biometric authentication. This e-newsletter will teach you how to configure biometric authentication without customization.
When researching biometric devices, you need to know that Oracle's SQL*Net authentication layer mainly uses Remote Authentication Dial-In User Service (RADIUS), a client/server protocol defined in RFC 2138 and RFC 2139. (Oracle 8 did support Identix and SecurID authentication, but Oracle now recommends upgrading to CyberSafe, RADIUS, Kerberos, or SSL.) Many RADIUS servers use LDAP directories to store the related biometric data, but many can also store it in a SQL RDBMS.
First, install RADIUS-compliant client software on the Oracle database server machine and on each client that will use this kind of authentication. Both the database client and the database server must be able to reach the RADIUS authentication server, and the client must be able to display any utility windows (for example, a prompt for a PIN or a password confirmation) when authentication is activated.
On the database client side, you install Oracle Advanced Security and select the RADIUS method. You can do this with the UNIX utility netmgr, or by choosing Oracle | Network Administration | Net Manager from Windows' Start | Programs menu.
Once you install it, you can also manually configure the SQL*Net client to use RADIUS authentication by adding the following line to the local sqlnet.ora file:
SQLNET.AUTHENTICATION_SERVICES=(RADIUS)
On the database server side, you must generate a radius.key file from the RADIUS server and copy it to the $ORACLE_HOME/network/security directory. Then use the netmgr program on the database server machine to configure the RADIUS server's host name, port number, timeout, number of retries, and the location of the radius.key file. You can also do this manually by adding the following lines to the server's sqlnet.ora file:
SQLNET.AUTHENTICATION_SERVICES=(RADIUS)
SQLNET.RADIUS_AUTHENTICATION=localhost
SQLNET.RADIUS_AUTHENTICATION_PORT=(1645)
SQLNET.RADIUS_AUTHENTICATION_TIMEOUT=(15)
SQLNET.RADIUS_AUTHENTICATION_RETRIES=(3)
SQLNET.RADIUS_SECRET=(?/network/security/radius.key)
SQLNET.RADIUS_CHALLENGE_RESPONSE=(OFF)
SQLNET.RADIUS_CHALLENGE_KEYWORD=(challenge)
SQLNET.RADIUS_CLASSPATH=($ORACLE_HOME/network/jlib/netradius.jar:$ORACLE_HOME/JRE/lib/vt.jar)
SQLNET.RADIUS_AUTHENTICATION_INTERFACE="DefaultRadiusInterface"
SQLNET.RADIUS_SEND_ACCOUNT=OFF
Replace the string localhost in the example above with the hostname or IP address of the machine running the RADIUS server; the remaining values shown are the defaults. If you use PIN or password "challenges" with RADIUS, a small window should pop up asking the user for the information. This window is typically written in Java, and you can customize it for your applications using the SQLNET.RADIUS_CLASSPATH and SQLNET.RADIUS_AUTHENTICATION_INTERFACE parameters.
Next, you need to create or alter database user accounts to use external authentication:
SQL> CREATE USER username IDENTIFIED EXTERNALLY;
SQL> ALTER USER username IDENTIFIED EXTERNALLY;
You also need to modify the database startup parameters (init.ora) to use external/OS authentication with:
OS_ROLES=TRUE
REMOTE_OS_AUTHENT=FALSE
OS_AUTHENT_PREFIX=""
The last two parameters ensure that users cannot connect to the database using OS-authenticated accounts (those created with the default "OPS$" prefix).
When you use biometrics (with optional challenge-response) instead of a username and password, connect to the database with connect /@database, or simply connect / if you configure the database as the default connect string. Since RADIUS authentication happens in the SQL*Net layer, all application programs (including Oracle Forms, Reports, and OCI or PL/SQL programs) will automatically start using RADIUS and biometric authentication.
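For example, once an account is set up this way, a command-line session might look something like the following minimal sketch (orcl is a hypothetical TNS alias; substitute your own connect string):

$ sqlplus /@orcl    # no username or password; SQL*Net hands authentication to RADIUS
$ sqlplus /         # same idea, relying on the default connect string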
Scott Stephens worked for Oracle for more than 13 years in technical support, e-commerce, marketing, and software development.
Delete Hiberfil.sys in Windows XP before defragmenting
By Greg Shultz, TechRepublic
Monday, March 28, 2005 1:29 PM
If you use Windows XP's Hibernation feature on your laptop, you may want to delete the Hiberfil.sys file from the hard disk before defragmenting. When you put your computer in hibernation, Windows XP writes all memory content to the Hiberfil.sys file before shutting down the system. Then, when you turn your computer back on, the OS uses the Hiberfil.sys file to put everything back into memory, and the computer resumes where it left off. However, Windows XP leaves the Hiberfil.sys file on the hard disk, even though it's no longer needed.
The Hiberfil.sys file, which can be very large, is a special system file that Disk Defragmenter cannot defragment. Therefore, the presence of the Hiberfil.sys file will prevent Disk Defragmenter from performing a thorough defragmenting operation.
Follow these steps to remove the Hiberfil.sys file from the hard disk:
1. Access the Control Panel and double-click Power Options.
2. Select the Hibernate tab in the Power Options Properties dialog box.
3. Clear the Enable Hibernation check box and click OK.
As soon as you clear the check box, Windows XP automatically deletes the Hiberfil.sys file from the hard disk. Once you complete the defrag operation, you can re-enable the Hibernation feature.
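If you'd rather script this, Windows XP also includes the powercfg command-line utility, which can toggle hibernation without opening Control Panel. Here's a minimal sketch (switch support can vary by service pack, so check powercfg /? on your system first):

C:\>powercfg /hibernate off
C:\>defrag C:
C:\>powercfg /hibernate on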
Analyze Apache logs with Analog
If you're looking for a useful log analysis program, check out Analog. This powerful, fast tool creates Web pages based on the analysis of Apache log files.
If your Linux vendor doesn't provide binary packages, you may have to download and install the program from source. After installation, create a configuration file that tells Analog what logs to read and where to place the output.
If installed via RPM or DEB, Analog will typically place a default configuration file in /etc/analog.cfg. Make a copy of this file, and customize it to fit your needs.
Here are the essentials you need to set:
LOGFILE /var/log/httpd/access_log
HOSTNAME www.myhost.com
HOSTURL http://www.myhost.com
OUTFILE /var/www/html/logs/report.html
CHARTDIR /logs/images
LOCALCHARTDIR /var/www/html/logs/images
This tells Analog which log file to analyze, provides information on the host it's analyzing (i.e., hostname and URL), and indicates where to place the report file. (In this case, the resulting URL would be http://www.myhost.com/logs/report.html.) It also tells Analog where to write the image files for the charts that it creates.
Analog creates a very comprehensive output that includes a number of statistics, such as monthly page views, daily and hourly summaries of page requests, most used search requests to reach the site, and more.
For an up-to-date report, run Analog every day by using the following:
# analog -G +g/etc/myanalog.cfg
This assumes your customized configuration file is /etc/myanalog.cfg, and it tells Analog to use the specified configuration file instead of the default configuration file. This comes in handy if you've configured Apache to create log files for different virtual hosts and want a different report for each virtual host.
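To automate the daily run, a cron entry works well. Here's a minimal sketch, assuming the configuration file above and that the analog binary is installed as /usr/bin/analog (adjust the path and schedule for your system):

# root's crontab: run Analog at 2:05 every morning
5 2 * * * /usr/bin/analog -G +g/etc/myanalog.cfg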
Tuesday, March 22, 2005
Open-source software still faces IP risks
Monday, March 14, 2005
Get log statistics with AWStats
If you're interested in analyzing log files, a few Web log file analyzers are available. The most widely known programs include Analog and The Webalizer.
However, another tool that contains a vast array of information is AWStats. AWStats is a free Perl program that you can run for real-time log analysis via a CGI script. In addition, you can run it periodically to create static Web pages.
The installation and configuration of this tool are quite simple, and the example config file doesn't require much modification. In fact, the only keywords you really need to change are LogFile, SiteDomain, HostAliases, and DirData. After you've copied the example file to a per-site configuration (e.g., /etc/awstats/awstats.myhost.com.conf) and made these changes, you're ready to begin creating reports.
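For reference, the handful of lines you end up editing might look something like this (the values are examples only; adjust the paths for your distribution):

LogFile="/var/log/httpd/access_log"
SiteDomain="myhost.com"
HostAliases="www.myhost.com localhost 127.0.0.1"
DirData="/var/lib/awstats"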
If you're monitoring a number of sites, you can create a configuration file for each site and write a cron job that runs every day and makes static pages. Let's say that you've set up a directory that will have domains as subdirectories (e.g., /srv/www/mysite.com/html/awstats/mysite.com). For this example, you would view the statistics by going to http://mysite.com/awstats/mysite.com/.
If you're running three Web sites (e.g., mysite.com, yoursite.com, and hersite.com), your script to process the statistics for each would look something like the following:
#!/bin/sh
AWSTATS="/usr/local/awstats/awstats.pl"
AWBUILD="/usr/local/awstats/awstats_buildstaticpages.pl"
for i in mysite.com yoursite.com hersite.com; do
    perl $AWBUILD -config=$i -update -awstatsprog=$AWSTATS -dir=/srv/www/mysite.com/html/awstats/$i
done
Set this script to run every night, and the statistics for all of the Web sites you host will be updated daily. AWStats writes the "root" page for each site as awstats.<domain>.html (awstats.mysite.com.html, for example), so it's a good idea to create an index.html symlink pointing to that file to make it even easier to view.
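Here's a minimal sketch of the nightly schedule and the symlinks, assuming the script above is saved as /usr/local/awstats/nightly-awstats.sh (a hypothetical path) and using the directory layout from the example:

# /etc/crontab entry: rebuild the static AWStats pages at 3:15 every night
15 3 * * * root /usr/local/awstats/nightly-awstats.sh

# one-time setup: point index.html at each site's generated root page
for i in mysite.com yoursite.com hersite.com; do
    ln -sf awstats.$i.html /srv/www/mysite.com/html/awstats/$i/index.html
done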
To download this handy tool, visit the AWStats Official Web site.
http://awstats.sourceforge.net/
Monday, March 07, 2005
Sync Linux data with a Pocket PC
The ability to sync Palm-based devices with Linux has existed for quite a while. However, as the popularity of Windows-based Pocket PCs increases, there's a growing need to be able to sync data from a computer running Linux with the Pocket PC--without using Windows.
The SynCE Project is working on exactly that. It works with Linux, FreeBSD, and similar operating systems.
While the project is still somewhat in its infancy, a number of add-ons and tools exist that work with popular desktops, such as GNOME and KDE.
In addition, several plug-ins are available that work with programs such as Evolution. However, it's unlikely that many distributions bundle SynCE, so you may need to do some compiling.
You can download SynCE from the SynCE Project's Web site. This Web site also sports a number of documents and tutorials to help walk you through the compile stage. In addition, you can download packages specifically for Red Hat, Fedora, or Debian, or you can build it using emerge on Gentoo.
Another useful tool is MultiSync, which synchronizes PIM data between GNOME-based systems and a Pocket PC. While MultiSync can handle other devices such as the Sharp Zaurus, Palm, and others, it also works with the Pocket PC, provided you use the SynCE plug-in for MultiSync. This program handles the synchronization between Evolution and the Pocket PC, allowing you to synchronize calendars, to-do lists, and contacts.
If you're a KDE user, you can use the KitchenSync tool to synchronize KDE PIM information with your Pocket PC, using the SynCE libraries to handle the connection.
RFID interoperability still an issue
Friday, March 04, 2005
[Oracle] Query data faster using sorted hash clusters
When data is stored in a normal table, the rows are physically stored in allocated blocks in the order in which you insert them into the database.
For example, if you have a table of information about employees, the employees' names would typically be stored in the table in the order in which they were added to the table.
If you have a large number of employees, queries against the table gradually get slower. You can speed up employee queries if you choose a column that gives a relatively equal distribution of values, such as the employee's department number, and create a cluster table.
In a cluster table, if the employees are in the same department, the rows would physically be stored in the same set of blocks. This makes queries for employees faster since it requires fewer database block reads to retrieve the employees for a specific department. In the non-clustered table, you might have to read every database block to find all the employees.
When you have a large number of keys, you'll start to see performance problems because you now have many cluster blocks. One way around this is to provide a hash function that restricts the number of cluster blocks. A hash function takes a numerical value and maps it into a predetermined range while still providing a relatively equal distribution of values. For example, you might create a hash function on the department number that only looks at its last two digits.
One problem with hash functions is that the hash value tends to randomize the order in which rows are naturally returned. You can usually fix this with an ORDER BY, but the sort becomes expensive when there are a large number of records. Oracle 10g addresses this by allowing you to define a "natural order" for the data, so you can retrieve hash cluster data in the desired order without sorting.
For example, suppose you maintain a database of credit card transactions. You decide that using the credit card number as a cluster key will give you a good distribution of data. But, because there are a large number of credit cards, you use a hash function to restrict the number of cluster blocks. Since you want your data to come back in chronological order for most of your reports, use a sorted hash cluster rather than using ORDER BY in every query.
Here's the syntax:
create cluster credit_cluster
(
card_no varchar2(16),
transdate date sort
)
hashkeys 10000 hash is ora_hash(card_no)
size 256;
create table credit_orders
(
card_no varchar2(16),
transdate date,
amount number
)
cluster credit_cluster(card_no,transdate);
alter session set nls_date_format = "YYYYMMDDHH24MISS";
insert into credit_orders (card_no,transdate,amount)
values ('4111111111111111','20050131000123',57.99);
insert into credit_orders (card_no,transdate,amount)
values ('4111111111111111','20050130071216',16.59);
insert into credit_orders (card_no,transdate,amount)
values ('4111111111111111','20050131111111',39.00);
insert into credit_orders (card_no,transdate,amount)
values ('4111111111111111','20050130081001',25.16);
Notice that I use the new function ORA_HASH to create a numeric hash value for the credit card. Now, you can simply query the data for a single credit card, and it automatically comes back in sorted order, like this:
alter session set nls_date_format = "FMDay, Month ddth, YYYY FMHH:MI:SSAM";
select * from credit_orders where card_no = '4111111111111111';
CARD_NO TRANSDATE AMOUNT
---------------- ---------------------------------------- ------------
4111111111111111 Sunday, January 30th, 2005 07:12:16AM 16.59
4111111111111111 Sunday, January 30th, 2005 08:10:01AM 25.16
4111111111111111 Monday, January 31st, 2005 12:01:23AM 57.99
4111111111111111 Monday, January 31st, 2005 11:11:11AM 39
Scott Stephens worked for Oracle for more than 13 years in technical support, e-commerce, marketing, and software development.
Tuesday, March 01, 2005
[Network Administration] 10 ways to improve network performance
[Linux] Who's been in your Linux system?
By Vincent Danen, TechRepublic
Friday, February 25, 2005 8:21 PM
Linux is a multiuser system, and that means that more than one person can log into the system at any given time. You can also log into the desktop as well as a console (or even two) at the same time.
It's not uncommon to have more than one user connected to a Linux system at one time. Friends or family can connect remotely via ssh.
Determining who has logged into the system is very simple; a couple of small utilities will tell you. The easiest to use is the who command, which displays who is currently logged in and from where.
Here's an example:
$ who
root tty1 Jul 24 10:13
joe pts/0 Aug 1 14:17 (somehost.com)
This shows that root is logged in on the first tty (the console) and that joe has logged in via ssh, connecting from the machine "somehost.com." The output also indicates when each user logged in.
Another useful tool is the last command, which provides information about when a user last connected to the system. Like the who command, the last command returns the username, where they connected, and when they logged in. It also tells you when they logged out or if they're still connected.
Here's an example:
$ last
joe pts/0 somehost.com Sun Aug 1 14:17 still logged in
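You can also pass a username to last to narrow the report to a single account. A quick sketch (the second line of output is illustrative only):

$ last joe
joe      pts/0    somehost.com    Sun Aug  1 14:17   still logged in
joe      pts/1    somehost.com    Sat Jul 31 09:02 - 09:45  (00:43)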
Monitor network traffic with ngrep
When it comes to network monitoring, a number of tools are available. However, one tool that administrators often overlook is network grep (ngrep).
As a network sniffer or monitor, ngrep is very similar in some respects to tcpdump, but it's somewhat different because you can use grep-style syntax to filter what you want.
Ngrep's most basic use is to listen to all traffic on an interface. However, you can extend this quite a bit to narrow down what you're looking for. Ngrep's syntax is similar to that of tcpdump. Here's an example:
$ ngrep '' port 80 and src host 192.168.5.10 and dst host 192.168.5.100
This monitors all traffic on port 80 from the host 192.168.5.10 to the host 192.168.5.100. (The empty quotes give ngrep an explicit, empty match pattern, so everything after them is treated purely as the BPF filter.)
If you're interested in watching Telnet traffic, you can do so using ngrep. You can make it only return traffic that shows a login string by using grep-style syntax. Here's an example:
$ ngrep -q -t -wi "login" port 23
This tells ngrep to look for the string "login" as a word (without case sensitivity) on port 23 for any connection. In this case, ngrep operates in quiet mode so it only prints out matches. In addition, it timestamps them (as designated by the -t option).
Used in conjunction with tcpdump, ngrep can also be very valuable for searching standard pcap dump files to look for patterns. If you have a large dump file from tcpdump, you can use ngrep to examine it by using standard ngrep commands and issuing it an input file with the -I parameter. Here's an example:
$ ngrep -I /tmp/packet.dump -wi "login" port 23
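Most ngrep builds can also do the reverse: the -d option picks a specific interface to listen on, and -O writes the packets that matched to a pcap file of their own for later inspection. A sketch, assuming an eth0 interface and the same Telnet search (check ngrep -h for the options your build supports):

$ ngrep -d eth0 -O /tmp/telnet-login.pcap -q -t -wi "login" port 23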