SNA Server Sets Future Net Protocols For Windows

SNA Server 4.0 extends the product’s already broad reach even further, pairing important new protocol gateway features with innovative data and transaction integration features. In particular, SNA Server’s new ability to provide seamless access to mainframe transaction code will give Windows developers the best of both worlds. For straight terminal access, organizations that don’t want to run IP directly on their host will find SNA Server the most complete option around.

Pros: Rich 3270 and 5250 emulation services; failover and load balancing; lets host COBOL transactions be invoked as COM components under Microsoft’s Transaction Server; OLE DB host data access driver; new support for Physical Unit pass-through, compression and LU 6.2 security.

Cons: Server runs only on Windows NT 4.0; no Web-based management of server; OLE DB driver doesn’t provide an API set broad enough for many real-world uses.

Pressing forward on all fronts, Microsoft Corp.’s SNA Server 4.0 beefs up its already impressive SNA protocol gateway support with new tools that make it dramatically easier for Windows programmers to access host resources.

PC Week Labs evaluated a beta release of SNA Server 4.0, expected in the first quarter of next year (prices have not yet been announced), and found it a soup-to-nuts host connectivity package, offering a wide variety of tools and services that make just about anything on a mainframe or AS/400 easy to access from a PC.

However, shops without Windows clients won’t find nearly as much value in SNA Server 4.0 as Windows-only shops will–OS/2 is particularly well-represented in organizations that require strong host connectivity. New features aside, SNA Server still provides a rich set of SNA gateway features that all organizations will be able to use.

In addition to testing 5250 terminal emulation over a TCP/IP link to an in-house IBM AS/400, we set up access to AS/400 shared folders using SNA Server; we then could access the shared folders from other systems with only the normal Windows networking client installed. SNA Server also supports 3270 connections to mainframes.

In addition to Windows NT, SNA Server provides client libraries for DOS, Windows 3.1 and Windows 95. Macintosh, OS/2 and Unix clients are available from third parties.

Newly supported in SNA Server 4.0 are Physical Unit pass-through, which enables SNA Server to support terminal hardware or printers that must be assigned particular Physical Unit identifiers; compression of SNA packets; and LU 6.2 security, which allows organizations to require that users access the host only through a particular LU connection.

SNA Server’s main competition, IBM’s Communications Server, already supports these SNA features.

SNA Server continues to provide robust failover and load balancing across groups of SNA Servers, a major advantage for organizations that must have host-access gateways that approach the reliability of the host systems to which they connect.

IBM’s Communications Server doesn’t provide as flexible support for failover or TN5250 emulation as SNA Server does–a big drawback for organizations that want to use TCP/IP-based clients to access AS/400 systems. (IBM plans to add TN5250 support in Communications Server 5.1, which is slated to ship at about the same time as SNA Server 4.0.)

Communications Server provides much greater deployment flexibility than Microsoft’s SNA Server, however, because the IBM product runs on a variety of operating systems. Communications Server also provides limited Web-based management, whereas SNA Server does not.

A new OLE DB driver

SNA Server’s new OLE DB driver can access both AS/400 physical and logical files, as well as mainframe sequential files, VSAM (Virtual Storage Access Method) files and partitioned data sets.

We set up an OLE DB link to an AS/400 physical file and were able to browse, search and modify the original file directly from our PC. OLE DB is still a prototype technology, however, and we recommend that organizations hold off until more clients support it and until Microsoft overhauls the ADO (Active Data Object) libraries (which use OLE DB) to add critical core functionality.

In particular, ADO currently lacks (and badly needs) support for Binary Large Objects, user-defined data types, more flexible locking, and more precise date and time handling.

When these features are delivered, ADO will become a more realistic option for real-world development.

A big integration feature that will be put to use immediately is SNA Server’s COM (Component Object Model) Transaction Integrator for IBM’s CICS (Customer Information Control System) and IMS (Information Management System).

We don’t have a System/390, so we couldn’t test this feature ourselves, but what we saw will certainly catch the eye of shops that want to leverage proven, debugged mainframe code in their Windows applications.

SNA Server includes a COBOL Wizard, which PC Week Labs used to import source code for a variety of CICS transactions written in COBOL. The COBOL Wizard parsed out CICS communication definitions from the source code and automatically built an equivalent COM object that runs in Microsoft’s Transaction Server. No mainframe code changes are required to support this feature.

Whenever any package calls this COM object, the COM object uses SNA Server to transparently call the associated mainframe code. Two-phase commit support between Windows and mainframe components is automatically provided by Transaction Server.
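The generated object is, in effect, a transparent proxy for the host transaction. The following sketch is illustrative only (SNA Server’s actual Transaction Integrator emits COM objects from imported COBOL definitions; every name below is invented), but it shows the forwarding pattern in miniature:

```python
# Illustrative only: SNA Server's Transaction Integrator generates a
# real COM object from imported COBOL definitions. This stand-in shows
# the proxy idea; every name here is hypothetical.

class HostTransactionProxy:
    """A local object whose methods forward to a host transaction call."""

    def __init__(self, transaction_name, invoke):
        self._name = transaction_name
        self._invoke = invoke   # callable standing in for the SNA-borne call

    def execute(self, **fields):
        # The caller sees an ordinary local method; under the covers the
        # gateway would carry the request to CICS or IMS and return the
        # host's response.
        return self._invoke(self._name, fields)

def fake_host(name, fields):
    """Stubbed 'mainframe' so the sketch runs without a System/390."""
    return {"transaction": name, "echo": fields, "status": "OK"}

proxy = HostTransactionProxy("CUSTINQ", fake_host)
print(proxy.execute(account="12345"))
```

The point of the pattern is that the calling application never knows a mainframe is involved; swapping the stub for a real gateway call changes nothing on the caller’s side.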

Transaction calls running in the other direction (from mainframe to PC) aren’t currently supported; such support would allow mainframe shops to off-load processing to less expensive PC systems.

Push Technology Changed It All

In the nearly two years since the first push systems landed on desktops, vendors have sold the technology to almost everyone–everyone, that is, but corporate IT.

To be sure, the concept of pushing data or applications to desktops has caught the eye of users and analysts alike. Startups such as PointCast Inc., BackWeb Technologies Inc. and Marimba Inc. are still enjoying excellent mind share and cash flow from venture capitalists.

But so far, many large companies don’t have push at the top of their to-do lists. For these sites, there are many more immediate concerns: year 2000 compliance, enterprise application deployment and even rolling out intranets.

“Right now, we are using our intranet [to distribute information], and it works just fine,” said Dana Abrams, director of global initiatives at Rockwell International Corp., in Costa Mesa, Calif.

“We have more pressing issues to deal with than getting a push system in place,” Abrams added. “It’s great for getting information off the Web, but internally the need is not there.”

Moreover, IT managers are finding that push is becoming a hard sell to those who could benefit the most: end users.

In the case of Eli Lilly and Co., which has evaluated most push systems, IS managers have not yet found enough interest on the part of internal content providers to warrant serious push deployment, according to David Baker, associate information consultant at the Indianapolis-based company.

“The content owners need to step up to the plate,” Baker said. “We need to have corporate information creating content that will support a Lilly channel. I think this is a case where users are lagging behind the technology. We are having to educate the user community about what they can do with the technology.”

Ironically, that wasn’t a problem when push arrived in February 1996 in the form of PointCast. That client provided a means to corral the chaos of data lying in wait on the Web.

PointCast, like other push vendors, sought to turn its early success on the Web inward to intranets. But these vendors have met far less success there.

Push vendors have been unable to produce the new poster children of push: major corporate customers that have used push to realize a great return on investment. Why not? The fear factor is one main reason.

“PointCast got out there really early and filled a need for providing easier access to information for users,” said Melissa Bane, an analyst at The Yankee Group Inc., in Boston. “But at the same time, there was a lot of backlash. It looked really sexy, but it was difficult for IS managers to control.”

Confusion reigns

In addition, confusion still reigns over a dominant push model. Content aggregators such as PointCast are trickling into corporations through free Web site downloads. But enterprisewide installations are rare. Companies such as NetDelivery Inc., which wants to act as a push outsourcer, are hard to find. And other providers, such as Marimba, are waiting for corporations to recognize their importance.

Add to the mix the increasingly large shadows of Microsoft Corp. and Netscape Communications Corp., which are just now rolling out customizable browser channels. The result: Push vendors are still waiting for their day in the enterprise sun.

IT managers are wary of using push systems for any mission-critical information.

“If I am going to roll out an application to thousands of users, I’m going to use something like [Microsoft Systems Management Server] or Tivoli [Systems Inc.’s network management software], where I have incredible control over [which applications are deployed] and how the larger applications are going to be deployed,” said Edward Glassman, director of IT strategies at Pfizer Inc., in New York.

There are some success stories. McAfee Associates Inc. is using a BackWeb channel to provide automatic updates to its virus scanning software.

That service has won over a number of longtime McAfee customers, including American Family Insurance, in Madison, Wis., and the Tulane Medical Center, located in New Orleans.

Conference Plus Inc. worked with another push vendor, Wayfarer Communications Inc., to deploy a push-based information service integrated with its ACD (automatic call distribution) systems.

Faced with problems in handling calls quickly, Conference Plus used Wayfarer’s Incisa server software to receive information directly from its ACD and deliver it to service representatives on their desktops, according to John Bogaerts, senior manager of integration and implementation at the Schaumburg, Ill., company.

By providing real-time information, such as how long a caller has been on hold, the system cut hang-ups from 10 percent of Conference Plus’ call volume to 0.5 percent.

At the aerospace company of AlliedSignal Corp., in Teterboro, N.J., administrators are using a BackWeb channel to push engineering data to 3,000 engineers around the world, said Thomas Henderson, manager of IT.

“It occurred to us that there is a need to share engineering information quickly and efficiently,” said Henderson.

“But E-mail is inefficient, list servers are cryptic and Web pages are difficult to update,” Henderson added. “I think there are a lot of possibilities [for push] if you can get your hands around the technology and don’t get caught up in the glamour of it.”

LDAP: Functional, But Never Ready For Prime Time

The LDAP tidal wave forced vendors to adopt the standard whether they wanted to or not, galvanizing product development efforts.

Lightweight Directory Access Protocol quickly gained that power because the directory market is small and immature. No single vendor or technology had grabbed the market share and the momentum to drive the market when LDAP came along, so the standard took center stage.

In the messaging market, things are different. Yes, we have Internet standards that have a significant impact. Vendors are lining up to support SMTP, POP3, IMAP4 and LDAP as the Internet becomes the foundation for inter-company communication and commerce.

But the messaging market is relatively mature. Customers have understood the strategic importance of messaging for some time and have deployed lots of products. We also have a small number of well-entrenched vendors with large market shares.

Standards have much larger mountains to move as they attempt to take over corporate messaging systems. For example, the degree to which customers can base messaging systems on pure standards implementations remains a question of priorities.

Today, many pure Internet mail products cannot match proprietary products feature-for-feature because the Internet messaging protocols have yet to gain the functionality an enterprise customer might need. Replication remains a shortcoming, and there’s no standard calendaring and scheduling protocol. And while IMAP4 has a lot of promise, it has drawbacks such as a single-server architecture, the lack of server-based conversation threading, and no support for server-side filters and rules.
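The threading gap, for example, pushes work onto the client. Below is a minimal sketch of the kind of client-side workaround a standards-based mail client might resort to, grouping messages into conversations by stripping reply prefixes (a crude invented heuristic, not any vendor’s actual algorithm):

```python
import re
from collections import defaultdict

def thread_by_subject(messages):
    """Group (subject, msg_id) pairs into conversations on the client.

    Because IMAP4 has no server-side threading, a standards-based
    client must rebuild conversations itself; this crude heuristic
    just strips reply/forward prefixes and groups by subject.
    """
    threads = defaultdict(list)
    for subject, msg_id in messages:
        key = subject.strip()
        while True:
            stripped = re.sub(r"^(re|fwd?):\s*", "", key, flags=re.IGNORECASE)
            if stripped == key:
                break
            key = stripped
        threads[key.lower()].append(msg_id)
    return dict(threads)

msgs = [("Budget", 1), ("Re: Budget", 2), ("Fwd: Re: Budget", 3), ("Lunch", 4)]
print(thread_by_subject(msgs))   # two conversations: "budget" and "lunch"
```

A proprietary server can do this once for all users; every pure-standards client has to repeat the work, which is exactly the functionality gap the column describes.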

This features gap between standards-based and proprietary products will be a factor in buying decisions for some time. Customers can implement pure standard protocols while waiting for them to catch up in functionality or implement proprietary protocols to get functions they need.

That trade-off is also causing vendors to make some interesting choices as they build products. Established vendors such as Lotus, Microsoft and Novell are creating multiprotocol servers that support both proprietary protocols and Internet standards, for example, but they’re competing based on the features their proprietary protocols provide. For customers with large installed bases of proprietary messaging systems, such a multiprotocol approach may be the best migration path toward Internet standards.

On the other hand, vendors such as Netscape are positioning their systems as “pure” standards products. But to remain competitive, Netscape is working feverishly, fostering standards creation, extending current standards to add important functions and using proprietary protocols when necessary. For customers who put standards support before functionality, such an approach may be the best option.

Despite the features trade-off, Internet messaging standards are extremely important. Customers should demand support for them from their vendors, and should plan to deploy them in their organizations.

The effort to create more functional standards is under way. Meanwhile, customers must carefully judge what’s important to them now, and five years from now, as they implement messaging solutions.

Making The Most Of Network Bandwidth

The switch to Gigabit Ethernet and other high-speed technologies will eventually increase the bandwidth available to users, but in the meantime, network managers should look for ways to get the most out of the bandwidth they have.

One weapon in the campaign for network efficiency is IP multicasting, now moving from experiments on the Internet’s Mbone (Multicast backbone) toward commercial implementation. IP multicasting can reduce bandwidth demands when groups of users need the same data at the same time, so it’s ideal for such uses as pushing financial data, videoconferencing, streaming multimedia for training and corporate reports, as well as groupware such as shared whiteboards.

The simplest way of transmitting data to multiple recipients is to send a copy of the data to each individual. But this technique, called unicasting, wastes bandwidth–sending a 2M-bps MPEG video to only 10 users would saturate a network pretty quickly.

IP multicasting takes a different approach, letting the source transmit only one copy of the data and depending on multicast-capable routers to duplicate the data whenever more than one recipient is detected downstream. Only one copy of a multicast message passes a router in the network. Copies of the message are made when paths diverge at a router, helping to conserve bandwidth.
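The arithmetic behind that advantage is simple, as a quick sketch using the 2M-bps, 10-recipient example shows:

```python
# Back-of-the-envelope comparison of unicast vs. multicast load at the
# sender's uplink, using the article's example of a 2M-bps MPEG stream
# delivered to 10 recipients.

STREAM_MBPS = 2.0
RECIPIENTS = 10

def unicast_load(stream_mbps, recipients):
    """Unicast sends one copy per recipient, so load grows linearly."""
    return stream_mbps * recipients

def multicast_load(stream_mbps, recipients):
    """Multicast sends a single copy; routers duplicate it downstream,
    so the sender's load is independent of the number of recipients."""
    return stream_mbps

print(f"unicast:   {unicast_load(STREAM_MBPS, RECIPIENTS):.1f} Mbps")
print(f"multicast: {multicast_load(STREAM_MBPS, RECIPIENTS):.1f} Mbps")
```

With unicast, 10 viewers cost the sender 20M bps; with multicast, the cost stays at 2M bps no matter how many recipients join.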

Another technique, called broadcasting, is often used in network maintenance. Broadcasting is essentially a special case of multicasting in which the group comprises every host on the network rather than just certain individuals.

IP multicasting uses Class D IP addresses–those with 1110 as their high-order 4 bits–to specify multicast host groups. In Internet “dotted decimal” notation, host group addresses range from to To send an IP multicast datagram, the sender specifies an appropriate destination address, which represents a host group. Multicast datagrams are then sent via normal IP send operations, which are also used for unicast datagrams.

Although the sending side is quite simple, the receiving side of IP multicasting is more complex. To receive datagrams, an application on users’ workstations requests membership in the multicast host group that’s associated with a particular multicast.

This membership request is transmitted to the user’s LAN router using Internet Group Management Protocol and, if necessary, is sent on to intermediate routers between the sender and the receiver.

Once this step is completed, the receiving workstation’s network interface starts “listening” for the data-link-layer address that’s associated with the new multicast group address. Routers on the WAN deliver the requested incoming multicast datagrams to the LAN router, which then maps the host group address to its associated data-link-layer address and builds the message using this address.

The receiving link’s network interface card and network driver, listening for this address, pass the multicast datagrams up to the TCP/IP protocol stack, which makes the data available to the user’s application.

A need for new routing protocols

Commercial support of IP multicasting is increasing, but there are significant issues to resolve. Mbone, the experimental virtual network overlaid on the Internet for multicasting, has already exhibited the scalability problems that can occur as the multicast network grows. Not only do the participating routers have to deal with the dynamic topological changes common to networks, but they also must deal with the dynamics of host groups as members join and leave.

New routing protocols are being developed and are seeing limited deployment, but more are needed to help ISPs set policies for passing multicast traffic among themselves. Protocols such as Protocol Independent Multicast, Multicast Border Gateway Protocol and Hierarchical DVMRP (Distance Vector Multicast Routing Protocol) are likely to see increasing use, supplanting or at least complementing the original multicast routing protocols, DVMRP and Multicast Open Shortest Path First.

The first implementations of IP multicasting depended on the traditional best-effort delivery method of IP and User Datagram Protocol, but that cannot guarantee reliable delivery of multicast traffic. A variety of protocols (more than 15 at last count) have been proposed for reliable multicast delivery, but no single reliable multicast protocol is yet capable of handling the wide variety of group distributions, the feedback required by the sender, or the various types of applications that use multicast.

Finally, QOS (quality of service) for multicast traffic is an important issue, especially considering the importance of multicasting for distributing multimedia and real-time data on the Internet. Many types of multimedia have special timing and delay requirements that have to be guaranteed by the network if the delivered data is to be useful.

Setting QOS via a protocol like Resource Reservation Protocol can be difficult enough for a session between a server and a single client; problems can increase geometrically when trying to do the same for a multicast session.

Despite the issues that must still be resolved, it’s possible to deploy IP multicasting for business purposes, particularly on intranets and carefully designed extranets. Companies such as Toys R Us and General Motors Corp. are using IP multicasting to distribute software updates and inventory reports nationwide, in some cases cutting transfer times by roughly a factor of 100–for example, from 6 hours, 15 minutes down to 4 minutes.

BankBoston pushes financial data to its traders using multicasting, and Smith Barney Inc. is setting up to transmit live video feeds using multicasting.

Much of the software needed to run IP multicasting is already part of many networks. Most new routers support IP multicasting; routers updated in the past few years might already have built-in multicasting support and need only to be reconfigured to enable the service.

Most workstations also support the necessary protocols in their TCP/IP stacks. Available application software covers the gamut from videoconferencing to push, bulk file transfers and streaming multimedia.

Even if an entire corporate network isn’t multicast-enabled, managers might be able to connect “islands” that support multicasting but are separated by links that do not support it. An approach called tunneling allows managers to encapsulate multicast datagrams in standard unicast datagrams for transmission over nonmulticast-compliant links. This is how Mbone currently operates.
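The wrap-and-unwrap idea behind tunneling can be sketched in a few lines. Real Mbone tunnels use IP-in-IP encapsulation at the router; this toy version (with an invented header format) only illustrates the principle of carrying one datagram inside another:

```python
import struct

MAGIC = 0x4D54   # invented marker for this sketch, not a real protocol field

def encapsulate(dgram):
    """Wrap a multicast datagram for transit over a unicast-only link.
    Real Mbone tunnels use IP-in-IP encapsulation; this invented
    4-byte header only illustrates the wrap/unwrap idea."""
    return struct.pack("!HH", MAGIC, len(dgram)) + dgram

def decapsulate(packet):
    """Strip the header at the far tunnel endpoint and recover the
    original datagram for re-injection onto the multicast network."""
    magic, length = struct.unpack("!HH", packet[:4])
    if magic != MAGIC:
        raise ValueError("not a tunneled datagram")
    return packet[4:4 + length]

payload = b"multicast report data"
assert decapsulate(encapsulate(payload)) == payload
```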

ISPs such as MCI Communications Corp. and BBN Planet Corp. have been experimenting with IP multicasting on a limited basis, and now providers including UUNet Technologies Inc. and @Home Network Inc. have started offering multicast services to their customers, so businesses don’t necessarily have to build a new network from scratch to take advantage of multicasting. Several satellite companies have also added IP multicasting support to their product lines.

Remote Management Tools Becoming More Important To Network Managers

To overcome the unique challenges of managing a remote network, network managers are using a variety of tools to help them automate data transmissions, reconfigure remote-location PCs, gain access to trouble spots on the network and centrally track performance of networks that span multiple locations.

Chick-fil-A is a fast-growing restaurant chain with more than 700 outlets in the United States and Canada, all of which require remote management. With the company’s first international expansion, its chicken sandwiches, strips and nuggets are now being eaten by residents in far-off Durban, South Africa.

Chick-fil-A’s growing business demanded a better method for communicating with its stores than simply having them mail monthly profit and loss statements and then manually entering the data into the financial system from Oracle Corp.

“We needed something that would allow us to support the business and get changes out quickly,” said Mike Erbrick, manager of restaurant information systems for Chick-fil-A Inc., in Atlanta. At each restaurant, the company installed stand-alone PCs equipped with 28.8K-bps modems, a custom restaurant management program and RemoteWare remote access software from XcelleNet Inc., of Atlanta.

“Through its electronic software distribution, RemoteWare provides guaranteed information delivery from the server to the stores, and from the stores back to here,” said Erbrick. “XcelleNet’s niche is the infrequently connected node.”

The company uses RemoteWare to automate the delivery, retrieval and update of information between corporate offices and restaurant locations. On a monthly basis, the restaurants compile profit-and-loss data and send it in an EDI (electronic data interchange) format to OS/2 servers running RemoteWare. The data is then automatically distributed to the company’s Oracle financial system.

“If any errors occur, people are automatically sent an E-mail or paged,” said Erbrick. “That is a function of the work we’ve done with writing to RemoteWare and its API–it’s totally automated,” he said. A profit-and-loss statement is then generated and reviewed by a corporate accountant. Once approved, an E-mail message containing the statement is sent back to the restaurant.

RemoteWare is also used to handle several daily remote-management functions, including daily reports on sales and deposits. Remote sites also use RemoteWare’s E-mail feature; a mail gateway at the corporate site transfers the RemoteWare messages to and from its cc:Mail E-mail system.

“From the RemoteWare desktop application at their location, we essentially control what our operators see and what they have access to on the computer,” said Erbrick. An area of the desktop interface called Subscriber contains documents sent by the corporate offices to all of the restaurants. Using RemoteWare, Erbrick is also able to remotely control the configuration of each PC by automatically comparing their image files with a master image file.

Erbrick credits RemoteWare with the company’s ability to maintain a lean support staff: six corporate IT personnel support the 700 locations.

By year’s end, Chick-fil-A plans to migrate its corporate offices from NetWare and OS/2 to Windows NT. And in early 1998, restaurant cash registers will feed sales data directly to a PC. “Our goal is to capture transactions at their source to get a better understanding of who our customers are and what they are ordering,” said Erbrick.

A quick remote fix

Another RemoteWare user, Toyota Motor Credit Corp., supplements its remote management functions using pcAnywhere from Symantec Corp. “When there is a problem, and we need to look at what is going on with a remote machine, we use pcAnywhere to dial in, grab control and take a look at the problem,” said Jeff Ly, a senior programmer/analyst with the Toyota Motor Corp. division in Torrance, Calif.

Symantec’s pcAnywhere can also be used to update files, transfer files, check E-mail and access office-based applications. Compressor Controls Corp., a Des Moines, Iowa, manufacturer of control systems for turbomachinery used by Standard Oil Co. and other industrial giants, uses pcAnywhere32 8.0 for remote management across its internal network and modem lines.

Bill Dickerson, the company’s network support technician, can dial into the network from his PC at home to do after-hours maintenance. “Using pcAnywhere, I can play around with log scripts and try new software distribution routines without worrying about people logging in and getting messed up,” said Dickerson.

Once, when its PC-based building security system failed, Dickerson was able to resolve the problem remotely. “One weekend, the doors did not lock, so I dialed in and used pcAnywhere to get to the computer and give it a command,” he said.

Automated scripts allow files on Dickerson’s home and office PCs to be synchronized each night while unattended. “pcAnywhere calls into the office machine and compares the directories and files. Whatever differences there are, it makes the office computer files match the ones set up at home–that could be 50 to 60MB of data,” he said. Similarly, while he travels home for the night, Dickerson’s home computer calls into the office to retrieve new and changed files.
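A nightly compare-and-copy script of the sort Dickerson describes can be sketched simply. This minimal one-way version (the structure and behavior are assumptions for illustration, not pcAnywhere’s actual scripting) copies any file that is new or differs from the master copy:

```python
import filecmp
import shutil
from pathlib import Path

def sync(master, replica):
    """One-way sync: make `replica` contain up-to-date copies of every
    file under `master`. Returns the relative paths that were copied.

    A minimal stand-in for the scripted nightly compare the article
    describes; a real tool would also handle deletions, locking and
    conflicts in both directions.
    """
    copied = []
    for src in sorted(Path(master).rglob("*")):
        if not src.is_file():
            continue
        dst = Path(replica) / src.relative_to(master)
        # Copy when the file is missing or its contents differ.
        if not dst.exists() or not filecmp.cmp(src, dst, shallow=False):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            copied.append(str(src.relative_to(master)))
    return copied
```

Running the same routine in both directions on a schedule gives the home/office mirroring described above; the content comparison (rather than a simple timestamp check) is what keeps the two machines from drifting apart.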

Compressor Controls’ Texas sales office is also remotely managed. The remote users can already dial into the network to transfer files and receive E-mail, and scripting is in the works that will automatically update sales database information, said Dickerson.

In addition, the company uses pcAnywhere to access its internal, heterogeneous network. “pcAnywhere acts as a gateway on the network from Windows 95 to NT to 3.1 and among NetWare and NT file servers running different protocols,” Dickerson said.

Probing for remote trouble spots

While RemoteWare specializes in remote synchronization and pcAnywhere’s forte is remote control, another remote management tool measures network traffic. Network General Corp.’s NetXRay runs under Windows 95 and Windows NT to provide remote network monitoring, troubleshooting and analysis.

The Washington State Children’s Administration, a division of the state’s Department of Social and Health Services, uses NetXRay to remotely monitor the performance of networks at 50 sites. “We use NetXRay to analyze how new applications will affect the network, and to keep track of network utilization,” said Stewart Wood, the department’s network manager in Olympia, Wash.

NetXRay was used in a recent data warehouse buying decision. “We had vendors coming in to demo their product, showing us two- and three-tiered approaches to data warehousing,” said Wood. “While they were giving presentations, I was running NetXRay on the network and capturing traffic to find out what exactly was going on with different packets. I found that certain products did not work the way I was told they worked, and we were able to make a better decision.”

Wood is also in the process of measuring the effect of audio and video from the Internet on the company network. “I have 50 probes statewide that keep track of the errors on the network. We connect to those offices using NetXRay’s Console to run reports and graphs on the errors to try and isolate what is going on,” he said.

The state’s 50 children’s services sites are connected on a WAN with dedicated frame-relay lines running at T-1 and 56K bps. Each office has its own NT-based LAN and file server. NetXRay remotely gathers information from the probes to provide centralized packet analysis. Wood’s 16-person support staff can remotely manage 60 servers and 2,300 workstations.

Prior to using NetXRay, Wood had to send IT staff into the field to analyze problems.

“That could take several days to get set up,” he said. “We were not able to resolve problems in a timely manner. [The NetXRay solution] makes it easier for us to isolate errors and resolve them.”

The NetXRay probes are also used to capture packets and monitor Internet access by employees. While the product does not block user access to certain sites, it “can monitor where they’ve gone and notify a supervisor if there is a problem,” said Wood.

Neglecting Skills Can Sink IT Companies

After 20 years in systems development at Belk Department Stores Inc., Don Harris has gotten used to taking care of other people’s needs. These days, however, Harris is nurturing something other than Belk’s myriad users. He’s catering to the internal IT organization, hoping a little self-help will go a long way.

Harris, Belk’s first-ever manager of IT staff development, is now in charge of assisting the Charlotte, N.C., retailer’s IT department in taking command of its most coveted resource–its staff. His approach: taking inventory of the group’s skills and creating a skills management database. Armed with this critical data, Harris believes IT can do a better job of hiring, training and retaining employees with the ultimate goal of improving Belk’s business.

Sounds like common sense, but amazingly, carefully monitoring and managing the skills within an IT organization is something many shops neglect. According to a December 1996 study by Forrester Research Inc., in Cambridge, Mass., only 6 percent of 50 IT executives interviewed kept a catalog of their staff’s skills. Partly to blame for this oversight is IT’s past–in the days of big iron, there simply wasn’t enough variety to warrant this type of skills management. Not so in today’s IT shops, where executives are juggling a constantly changing, diverse set of needs amid a worsening skills crunch. Here, skills management becomes a crucial element in running a healthy IT organization that can deliver key projects on time and on budget.

“We aren’t asking the CIO to do anything that any [other part of the business] isn’t doing,” says George Tillman, vice president of the IT consulting group at Booz, Allen & Hamilton Inc., in New York. “This is what planning is all about.”

Getting started

Conducting a skills inventory isn’t very complicated. Information can be gathered through formal or informal surveys, in-person or anonymous interviews, on paper, or online. Belk’s Harris, for instance, is using skills management and assessment software from SkillView Technologies, of Plaistow, N.H., to track the expertise of 115 employees in the company’s systems department, a subdivision of Belk Stores Services’ MIS group.

In addition to SkillView, vendors such as Bensu Inc., Global Knowledge Network Inc., Hewlett-Packard Co. and IBM also sell products called “just-in-time learning tools” that include skills assessment features.

Going this route will require an investment on the part of the IT shop, experts say. According to the Forrester study, companies can expect to spend between $6,000 and $8,000 per person per year on skills management. That includes the expense of deploying the skills assessment software and database, the time employees will spend populating the database, and maintenance. The cost also encompasses compensation and benefits for a skills point person like Harris, who can command between $90,000 and $110,000, depending on the size of the IT organization.
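Forrester’s per-head figures make the budgeting arithmetic straightforward. As a rough sketch (the formula simply combines the ranges quoted above; the headcount is Belk’s 115-person systems department):

```python
def skills_mgmt_cost(headcount, per_head=(6_000, 8_000),
                     coordinator_salary=(90_000, 110_000)):
    """Rough annual cost range for a skills management program,
    using the per-person and coordinator-salary ranges Forrester cites."""
    low = headcount * per_head[0] + coordinator_salary[0]
    high = headcount * per_head[1] + coordinator_salary[1]
    return low, high

# For a 115-person department like Belk's:
low, high = skills_mgmt_cost(115)
print(low, high)  # 780000 1030000
```

At Belk’s scale, then, the program is roughly a $780,000-to-$1 million annual commitment before any payoff in retention or project delivery.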

Just throwing money at the problem won’t help, however, unless there’s a goal beyond tallying who’s doing what in the IT group. “You’ll have a very slim chance of succeeding … unless there is some broader process happening,” cautions David Foote, managing partner at Cromwell Foote Partners LLC, an IT management consultancy in Stamford, Conn. For instance, outsourcing, mergers and acquisitions, as well as new alliances or partnerships, can all drive the need for a skills inventory. More subtle issues–such as improving customer satisfaction by doing a better job of allocating staff to IT projects–can also be catalysts.

Putting a skills inventory in a larger context also helps calm workers who may feel threatened by the process. “It’s a very bad idea to just announce blindly that [you’re] going to do a skills assessment,” because people immediately worry that their jobs are on the line, explains Foote. The actual inventory should begin only after re-establishing IT’s mission, determining specific goals for the organization and deciding what resources are needed to reach those goals. “In the course of doing that, the writing is on the wall,” says Foote.

Bring in the recruits

In Belk’s case, the retailer decided to inventory the skills in its IT group in part to improve its recruiting efforts. “Charlotte is a really competitive market. … We were feeling the crunch,” says Harris, who went live with SkillView this fall. Harris is using the tool to define models for specific positions that recruiters can match against applicants’ qualifications.

Likewise, managers can proactively find internal candidates for jobs. By keeping abreast of their skills, “we can invite [specific] people [for certain jobs] because we’ll know what their skill set is,” Harris explains.
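The matching Harris describes–defining a skill profile for a position, then surfacing employees whose recorded skills cover it–boils down to a simple set comparison against the skills database. A minimal sketch, with all names and skills invented for illustration (SkillView’s actual data model is not public here):

```python
# Hypothetical skills inventory: employee -> set of recorded skills.
inventory = {
    "alice": {"cobol", "db2", "project management"},
    "bob": {"java", "sql", "unix"},
    "carol": {"cobol", "cics", "db2"},
}

def internal_candidates(required, inventory):
    """Return employees whose recorded skills cover the position model."""
    return sorted(name for name, skills in inventory.items()
                  if required <= skills)  # subset test: all required skills present

print(internal_candidates({"cobol", "db2"}, inventory))  # ['alice', 'carol']
```

The same lookup, run in reverse (which required skills almost nobody has), is what lets a shop like Belk spot training gaps and negotiate group rates.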

Harris hopes senior management’s attention to the IT skill pool and career issues will encourage employees to stay with Belk’s IT department longer. To that end, he also plans to use the skills management system to improve the organization’s training offerings. Until now, training was arranged by individual managers who had no way of knowing whether other people in IT might need similar education, Harris says.

Since starting the skills database, Belk has been able to secure group discounts for on-site training. Belk, which annually spends an average of $2,000 to $2,500 per IT employee for training, has also caught the attention of training vendors looking to pitch their solutions to the company. “A lot of times now, some of the training companies will call me,” says Harris.

For Carol Bynum, second vice president of Protective Life Corp., in Birmingham, Ala., skills management is a fundamental business practice that all other parts of a company expect–and one that IT must adopt. Bynum joined Protective Life four years ago after a long tenure as a consultant in project management for Perot Systems and Arthur Andersen. “In Perot, we did fixed price deals. If I didn’t get my estimates right, I was out of a job,” says Bynum, who since August 1996 has been using a tool from PlanView Inc., of Austin, Texas, to manage the skills of about 250 IT employees at Protective.

The insurance company wasn’t doing a good job of planning projects, largely because IT didn’t have a complete picture of its staff, explains Bynum. After protests from the business units–which pay for all their IT services–a steering committee comprising senior IT and business executives was established to begin a skills inventory and project management overhaul.

With PlanView, the business units now have much more precision in controlling what they will spend on a given IT project. Customers can view all their projects online, for example, getting access to data such as what they have spent to date and detailed project plans, including information about the staff working the project.

Bynum hasn’t yet quantified how much money Protective Life has saved by developing its skills management program. But she’s in the process of conducting an internal study to find out, and she is extremely optimistic. “We have learned so much,” Bynum says. “It has had a serious impact on the bottom line and will continue to. Now [we] are getting the whole picture of everyone–who they are, what they do and what they do well.”

The Web Invites Trouble For Employees

American workers are finding a whole new way to slack off: surfing for smut.

A study of 185 companies conducted between November 1996 and this month by consulting firm Digital Detective Services, of Vienna, Va., found that a quarter of the companies’ workers visited pornographic Web sites.

Media Metrix (formerly PC Meter), the top Web-traffic analysis company, reports that 19 percent of users at work visit smut sites (compared with 69 percent for news or information sites).

And a Nielsen Media Research study earlier this year claimed that staffers at IBM, AT&T Corp. and Apple Computer Inc. made 13,000 workplace visits to the Penthouse magazine Web site during a single month.

From these numbers, it appears that the “smut break” has replaced the coffee break as employees’ favorite way of letting off steam.

But human resource managers and other company executives tell a different story, getting decidedly nervous when asked whether workers at their companies are getting sidetracked by visits to off-color Web sites.

A random survey of large and small companies outside the information technology industries revealed that few have policies against improper use of the Web by employees. All said they have not had to discipline employees for workday porn surfing. And most would not allow their names to be used.

One thing is clear, however: Officials at companies that use the Web as an integral part of their business said they are less worried about workers getting sidetracked by personal surfing than are officials at companies new to the Web.

Douglas Rice, president of Internet-based advertising agency InterActive8 Inc., in New York, said it was hard to imagine a worker in his company’s very open offices spending much time on a porn site.

“We have such an open environment here. There are hardly any separate offices,” with many staffers in a large, open room, Rice said. “I think people stay away from the porn sites as much out of fear of ridicule as anything else.”

What’s more, the company’s 30 employees are savvy enough to realize that displaying pornographic images on their monitors could be construed as sexual harassment and could put them on the receiving end of a lawsuit, he added.

Other executives interviewed expressed doubts that they will ever be faced with having to discipline a worker for X-rated work habits. This response, from a senior executive at a New York-based management consulting company, was typical: “We don’t believe we have a problem with that here, though if we did, we’d obviously take action to correct it.”

The consultancy has no policy against improper surfing because company officials don’t believe the problem is ever likely to arise, he said.

An official at a Boston-based houseware products manufacturer said that since his company is still in the process of moving workers to the Web, the company has no workday surfing policy, although the idea hasn’t been ruled out. There isn’t a big concern about the conduct of the 50 or so workers who are now, or soon will be, on the Web, the official said.

A big problem?

These responses struck one Internet consultant as curious.

“This is a big problem, in spite of what some companies will tell you,” said David Yip, vice president of interactive services at consulting company Marknet Communications Corp., in Boston.

Yip, who has spent three years at Marknet helping companies get on the Web and build online storefronts, said he’s seen some eye-popping things on workers’ computer screens at some of his clients’ sites.

“In some of these places, people spend their entire lunch hour on the Playboy site,” he said, especially in companies with comparatively few female employees.

Workers are always amazed to find out how easy it is for their surfing habits to be tracked by company management, Yip said.

“Many people don’t realize that companies can watch everything employees do. On the Web, everything is traceable,” he said.

Marknet recommends to its clients that they put a surfing policy in writing to avoid potential legal problems if an employee ever has to be disciplined or terminated for improper Web use, Yip said.

Some companies might even want to consider taking measures to block workers’ access to certain Web sites, while putting in place technology that records the URLs that workers try to access–even if they’re on the “access denied” list, he said.
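The arrangement Yip recommends–blocking listed sites while still recording every attempted URL, including the denied ones–can be sketched as a small filter sitting in front of the proxy. This is an illustration only; the domain names and log format are invented, not drawn from any vendor’s product:

```python
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-adult-site.com"}  # hypothetical deny list
access_log = []  # every attempt is recorded, allowed or denied

def check_request(user, url):
    """Log the attempt first, then decide whether the site is blocked."""
    host = urlparse(url).hostname or ""
    allowed = host not in BLOCKED_DOMAINS
    access_log.append((user, url, "ALLOWED" if allowed else "DENIED"))
    return allowed

check_request("jdoe", "http://example-adult-site.com/index.html")
check_request("jdoe", "http://news.example.com/")
print(access_log[0][2], access_log[1][2])  # DENIED ALLOWED
```

The key design point is the ordering: the attempt is logged before the block decision, so the “access denied” list itself becomes evidence of policy violations rather than a silent wall.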

Related article: New Software Helps Combat Porn

The swelling number of users surfing the Web for pornography in corporate America is sparking a new market for products that let employers make sure their employees are productive during work hours.

“It’s a productivity issue,” said Bish Turlej, marketing manager for Tinwald Networking Technologies Inc., of Mississauga, Ontario, which makes monitoring software. “Managers need to know where their employees are going on the Web.”

Tinwald’s Internet SnapShot allows corporations to monitor how their Internet connections are being used, letting them know exactly where employees are clicking.

According to Digital Detective Services, of Vienna, Va., quite a few users are going to adult Web sites. The agency, which works with Washington-area law firms and corporations, revealed recently that one in four corporate computers contains some form of pornographic materials, including some instances of child pornography.

For Internet blocking and monitoring software makers, this means business. In 1996, only 1 percent of the approximately 94,000 companies with a direct connection to the Internet actively censored specific Web sites, according to Giga Information Group Inc., in Cambridge, Mass.

Giga expects that number to increase to 23 percent of more than 2.3 million companies by 2000. Company officials warned that blocking sites and monitoring employees is not a stand-alone solution.

“Creating and communicating an Internet access policy should be an essential component of [a company’s acceptable-use] plan,” wrote Ira Machefsky, an analyst at Giga, in a report.

Software makers, however, are coming up with solutions. One is monitoring usage with programs such as Tinwald’s. However, in a cyberworld sensitive to privacy issues, monitoring has been labeled a “Big Brother” technology.

“Corporations have to articulate these issues to their employees,” Turlej said. “Most of the time, though, the question is moot. A lot of times, companies can’t monitor their employees because it takes too much time.”

Yet many times, the software’s revelations come as a surprise.

“Most companies do not have a clue as to who is using the Internet and where they are going,” said Bob Perez, director of product management for Internet administration software maker Sequel Technology LLC, of Bellevue, Wash.

Once unacceptable sites are identified, managers can use software such as Sequel’s Net Access Manager to block employees from accessing them.

“Once a policy is defined,” said Perez, “blocking is a good way to enforce it.” By all accounts, a solid acceptable-use policy is essential before using such software in the workplace.

For Sequel, this is part of its business. “We are not in the business of blocking or being Big Brother,” explained Perez. “We help our clients enable whatever policy they decide upon.”

Policing the Internet: A template for corporations

The company has software and systems in place that can monitor and record all Internet usage. Our security systems are capable of recording (for each and every user) each World Wide Web site visit, each chat, newsgroup or E-mail message, and each file transfer into and out of our internal networks, and we reserve the right to do so at any time.

We reserve the right to inspect any and all files stored in private areas of our network in order to assure compliance with policy.

The display of any kind of sexually explicit image or document on any company system is a violation of our policy on sexual harassment.

The company uses independently supplied software and data to identify inappropriate or sexually explicit Internet sites. We may block access from within our networks to all such sites that we know of.

This company’s Internet facilities and computing resources must not be used knowingly to violate the laws and regulations of the United States or any other nation, or the laws and regulations of any state, city, province or other local jurisdiction in any material way.

Any software or files downloaded via the Internet into the company network become the property of the company.

Sun’s Strategies Were Compelling, But Sadly Ineffective

Sun has been riding the Java wave, but will it come crashing down on its hardware business?

Java has been a blessing for Sun Microsystems Inc. But for Sun’s revenue-driving hardware business, could Java turn out to be a curse?

More than any other company, Sun is tying its future–in the form of its successful RISC-based hardware business–to Java. While other RISC vendors are hedging their bets by embracing Windows NT and pledging support for Intel Corp.’s IA-64 architecture, Sun remains steadfast in its opposition to the Wintel camp.

With Java still unproven in the enterprise, that’s a risky bet. But it’s one that Sun is willing to make.

“The whole strategy behind Java is to create a level playing field; the master plan is to have a chance to compete,” said John Loiacono, director of strategy and branding at Sun Microsystems Computer Co., in Mountain View, Calif.

Such competition creates a double-edged sword for Sun. If Java does take hold as a viable, platform-independent environment for enterprise sites, it could negate Sun’s claims that its Unix-based SPARC processors provide unique advantages over Wintel systems.

“Java penetrates the corporate strategy on all levels,” said Jacek Myczkowski, vice president of development at Thinking Machines Corp., a Sun shop in Bedford, Mass. “But the verdict’s still way out if they can parlay that up the [product] food chain.”

A door opener?

Sun claims that the emergence of Java has led directly to new hardware sales. As evidence, officials point to the double-digit growth in server revenues since Sun started beating the Java drum in 1995.

“Java is an incredible door opener,” said Ed Zander, SMCC’s president. “A lot of times we walk in with a Java story and walk out with a data warehouse sale.”

That’s why Sun is positioning its servers and workstations as the ultimate Java platforms.

“We have designed the best Java thin-client servers on the planet,” said Bud Tribble, vice president and chief architect of Java systems at SMCC. “And if you want to run Java on the server, we have a more scalable implementation than anyone.”

To fuel workstation sales, which have flattened out in the face of NT’s encroachment into the workstation arena, Sun will promote its SPARCstations and Ultra workstations as the supreme platforms for developing Java applications.

Specifically, Sun is adding JIT (just-in-time) compilers and other software and hardware components to improve Java performance. In this regard, Sun’s not alone. Intel and IBM, for example, are working on Java compilers and accelerators for their respective Pentium and PowerPC processors.

Sun will also better compete on price, with plans to introduce a new line of SPARC-based “power desktops,” officials said.

Sun’s goal is to keep NT from eroding its dominant share of the Unix-based workstation market. The company had 41.5 percent of Unix workstations shipped worldwide in 1996, according to International Data Corp., in Framingham, Mass.

For the traditional desktop, Sun is turning its attention to thin clients in the form of its JavaStation.

“They can get away without NT on the server, but they need to have an alternative on the desktop,” said Tony Iams, an analyst at D.H. Brown Associates Inc., in Port Chester, N.Y. “That’s where the JavaStation comes in.”

Still not available

But JavaStation, announced in October 1996, still is not commercially available (Sun officials expect the systems to ship by year’s end).

In addition, full-blown business applications for Java-based thin clients, such as Lotus Development Corp.’s eSuite and Star Division’s StarOffice 4.0, are just beginning to trickle out.

On the back end, Sun will bundle Java-based Enterprise Storage management software with virtually all of its servers, enabling them to be controlled from any networked Java client.

For customers, the choice comes down to basic issues: uptime, performance and scalability.

“Executions [on Solaris servers] have been reduced in some instances to milliseconds, load balancing is tremendous, and they run for weeks or months before I have to reboot,” said Robert Gahl, chief information officer at Sphere Information Services Inc., a Web services provider in San Jose, Calif., that has been developing on Solaris and other platforms.

But in its eagerness to take the lead in performance, Sun may be overstating its systems’ capabilities. Last week, a benchmark developer poked holes in Sun’s claims that its hardware provided 50 percent better performance over Windows NT systems running Java applications.

Pendragon Software Corp., which created the CaffeineMark Java benchmark, said Sun tweaked its JIT compiler to recognize Pendragon’s benchmark test, rendering an abnormally high score.

“We have no reason to believe this was a master plan from Sun management, but the compiler was tweaked to look for our benchmark, and that doesn’t happen accidentally,” said Ivan Phillips, president of Pendragon, in Libertyville, Ill.

Consequently, Phillips said, the best Java performance Pendragon has tested came from a system running NT on 300MHz Pentium II chips.

Sun admitted tweaking the compiler, but defended the action.

“Our job is to provide the best performance on the benchmarks that are available, and we did that,” said Brian Croll, director of product marketing for Solaris. “But if it’s not representative of real-world Java applications, then that’s a problem with the benchmark.”

What Sun seems to be ignoring in the real world is that many traditional Unix sites are integrating Windows NT into their networks.

“We use Sun hardware because it supports very large databases and applications, although we are running NT in parallel for desktop applications,” said Chris York, technology manager at Chase Manhattan Bank, in New York, which is using Java to build large platform-independent financial applications.

‘A line in the sand’

It’s these sites that could present a problem for Sun down the road. Sun’s main Unix rivals–Hewlett-Packard Co., IBM, Digital Equipment Corp. and NCR Corp.–all have dual strategies for supporting both Unix and NT. Sun, competitors claim, has painted itself into a corner.

“Sun has drawn a line in the sand and said, ‘NT over our dead body,’” said Richard Belluzzo, executive vice president of HP’s Computer Organization, in Palo Alto, Calif. “We don’t think that’s smart for business or smart for the customer.”

“Without Java, Sun would have been eclipsed a long time ago,” said Ihab Abu-Hakima, vice president and general manager of the enterprise systems division at Silicon Graphics Inc., in Mountain View. “We’re taking direction from our customers, who are implementing a dual-platform strategy.”

Sun does seem to be softening its anti-Intel stance a bit: It has worked out a deal with NCR to run a future IA-64 version of Solaris on NCR servers. But executives scoff at claims that Sun is ignoring the issues its customers think are important.

“Yeah, NT is daunting, but customers want solutions that solve business problems,” Zander said. “We didn’t flinch when competitors [announced] Wintel deals, because the only people who make money selling Wintel are Win and Tel.”

Additional reporting by Rob O’Regan

Related article: Server-Side Java VMs Deemed a ‘Level Playing Field’

Sun Microsystems Inc. may assert that Java applications run faster on Solaris, but the company can’t claim any home-server advantage when it comes to compatibility. IT professionals and software developers say they have encountered few problems running applications across different platforms’ Java virtual machines.

“I don’t feel there’s an issue mixing and matching servers,” said Eric OKunewick, vice president and manager of enterprise architecture for Key Services Corp., in Cleveland.

“It’s really a question of how well the Java virtual machine complies with Java specifications,” said OKunewick, who is using Sun’s Java Virtual Machine on Windows NT servers and IBM’s CICS Java Gateway on an IBM MVS mainframe.

Mark Kerbel, president of Screaming Solutions Ventures Inc., in Toronto, said he has a lot of confidence developing in Java for both Solaris and Windows NT, and has also done some work for IBM’s AIX. “There’s definitely a level playing field,” said Kerbel. “We feel very comfortable developing applications on one server that can be thrown onto another.”

Recently, Screaming Solutions built three Java server applications for a financial services provider in Canada that let customers gather personal financial data on a Web site. The applications run on three Solaris servers, but were developed on Pentium Pro PCs running NT, said Kerbel. “We used the exact same code,” he said.

Officials at Blue Lobster Software Inc., in Rochester, N.Y., also do most of their initial development work on NT, said Michael Hickman, vice president of technology. Blue Lobster has just finished testing for the next version of its Mako product, which links CICS transactions on the mainframe and Java client or server applications.

Simon Arnison, chief technology officer at Innotech Multimedia Corp., in Toronto, said he has experienced only “teething problems” with different versions of the VM, mainly because VMs for BSD Unix, the Macintosh and Linux were not available until recently. Innotech’s product, NetResults, is a Java-based text search and retrieval engine launched last spring on a variety of platforms.

“It’s been our experience, in having developed with 100% Pure Java, that the vast majority of implementations of the Java machine on the vast majority of platforms have been good,” Arnison said. That wasn’t the case one year ago, when he experienced everything from memory and date class errors to I/O errors and segmentation violations.

“We no longer write any of our products in native code; we’ve bet the farm on Java,” he said.

However, there’s been a tradeoff writing in Java: Speed has deteriorated. “Our products, compared [with] native C++ compiled applications, run anywhere from four to 10 times slower,” said Arnison. “It is with some anxiety and trepidation we’re awaiting the arrival of the new Hot Spot technologies” from JavaSoft for real-time profiling, a performance booster within the VM that turbo-charges application performance, he said.