For a remote database services provider, the initial client meeting is perhaps the most critical factor in establishing a successful relationship.
Potential clients typically contact you for services or price quotes in response to a triggering event: data loss, database performance problems, unexpected data growth, staff loss, database project work, or a combination of these issues. They will describe the problem during the initial phone call or email.
Although you will be providing remote database services, a site visit is necessary as it:
- Increases the conversion ratio.
- Allows you to understand the nature of the client's business.
- Allows you to educate the client as to what services you will provide and the limits of these services.
- Provides you with access to the network to do an initial discovery and investigation.
Farming out database services to a third party is not an easy decision for many companies. Deserved or not, outsourcing has acquired a reputation for poor quality. An initial visit is highly valuable in dispelling a prospective client's concerns; a face-to-face meeting personalizes your services and increases the customer's comfort level.
The site visit also gives your sales and technical team insight into the nature of the client's business, allowing them to package services appropriately. In some cases you may identify a need for additional services, such as custom development or security services, or observe the client's workflow and suggest ways to streamline it or align processes with remote database services. Consider the case of a fulfillment center that experiences peak loads: Rather than rearchitect the database system to handle these peaks, a more appropriate and less costly option may incorporate a middle-tier queuing solution or Service Broker to offset them.
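The queuing idea above can be sketched in a few lines. This is a hypothetical illustration, not the author's implementation: an in-memory queue stands in for a middle-tier queuing layer (or Service Broker), absorbing a burst of orders while a background worker drains them to the database at a steady pace.

```python
import queue
import threading

# Bounded queue standing in for a middle-tier queuing layer: orders are
# accepted immediately during the spike and written out at a steady rate.
order_queue = queue.Queue(maxsize=10000)
processed = []

def db_writer():
    """Drain the queue at the database's sustainable pace."""
    while True:
        order = order_queue.get()
        if order is None:          # sentinel: shut down the worker
            break
        processed.append(order)    # stand-in for the actual INSERT
        order_queue.task_done()

worker = threading.Thread(target=db_writer)
worker.start()

# Peak load: accept 100 orders without touching the database directly.
for i in range(100):
    order_queue.put({"order_id": i})

order_queue.put(None)
worker.join()
print(len(processed))  # all 100 orders eventually reach the database
```

The design point is decoupling: the front end's acceptance rate is no longer bound by the database's write rate during the peak.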
The following tip outlines my own best practices for a successful initial client meeting and ongoing relationship.
Setting database services expectations
Clients often have unrealistic expectations of remote database service offerings: no downtime, 99.999% availability, subsecond response times under heavy load, etc. While these goals may be achievable, they may not be practical or within the client's budget. It is important to negotiate service level agreements (SLAs) with the client that define the limits of the services you will provide.
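It helps to ground an availability discussion in concrete numbers. A minimal sketch of the arithmetic behind "five nines":

```python
# Translate an availability SLA percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted by a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% availability -> {allowed_downtime_minutes(pct):.2f} min/year")
```

At 99.999% the budget is roughly 5.26 minutes of downtime per year, which makes clear why that target carries a cost.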
For example, downtime can be decreased with tested recovery plans; but as databases grow large -- even with differential backups, third-party backup compression software and point-in-time recovery -- downtime during recovery can be lengthy. If this downtime is unacceptable to the client, discuss standby options, outlining the pros and cons of each, the associated costs and the exposure to data loss. It is also important to establish a sliding scale for response times and a clearly defined scope for emergency, holiday and weekend service.
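A back-of-the-envelope estimate can make the recovery-time point concrete for the client. The sizes and throughput below are illustrative assumptions, not measurements:

```python
# Rough recovery-time estimate: restoring a full backup plus the latest
# differential at a given restore throughput. Log replay would add more time.
def estimate_restore_minutes(full_gb, diff_gb, throughput_gb_per_min):
    """Minutes to restore a full backup plus one differential."""
    return (full_gb + diff_gb) / throughput_gb_per_min

# A 500 GB full backup plus a 50 GB differential at 2 GB/min:
print(estimate_restore_minutes(500, 50, 2))  # 275.0 minutes -- over 4.5 hours
```

Numbers like these are often what moves a client from "just restore from backup" to a standby-server discussion.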
Nothing is more important than the initial survey and discovery process. While the client may have a very detailed inventory of their RDBMS and the environment, you must collect additional information to bring the client under your management.
Surveying a database environment
When we bring a client under our management our initial survey includes the following:
- Database environment details
- Physical server characteristics
- Benchmarking and change management
- Physical server location
- Database monitoring
Database environment details
In a SQL Server environment, you must know the SQL Server version, service pack and hotfixes applied, as well as the operating system it runs on and its service pack and patch level. A scripted discovery process will enumerate SQL Server databases that are not patched to the latest level and allow you to tailor your tools to the particular version of SQL Server the client is running. For example, SQL Server 2005 management and monitoring tools are very different from those for SQL Server 2000 or SQL Server 7.0. My company also checks the database compatibility level. Databases upgraded from SQL Server 7.0 or 2000 to 2005 retain their original compatibility level (70 or 80). As there are behavioral changes between compatibility levels, we draw the client's attention to this discrepancy.
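The patch-level part of such a discovery script reduces to a build-number comparison. A minimal sketch, assuming version strings have already been collected from each instance (the "target build" table below is illustrative, not an authoritative patch list):

```python
# Flag instances whose build number falls below a tracked target build.
# The target builds here are hypothetical examples per major version.
LATEST_BUILD = {
    "2000": (8, 0, 2039),
    "2005": (9, 0, 3042),
}

def parse_build(version_string):
    """Turn '9.0.1399' into a comparable tuple (9, 0, 1399)."""
    return tuple(int(part) for part in version_string.split("."))

def needs_patching(major, version_string):
    """True when the instance's build is older than the tracked target."""
    return parse_build(version_string) < LATEST_BUILD[major]

print(needs_patching("2005", "9.0.1399"))  # older build -> True
print(needs_patching("2005", "9.0.3042"))  # at target   -> False
```

Running this across every discovered instance yields the patch-status report in one pass.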
You may run database security checks. Inform the client that you will be doing these checks as part of your initial investigation and have them sign off. These security checks should include checks for blank passwords, passwords that match the account name, and weak or easily breakable passwords. Should the client request it, you may crack the sa password (or the passwords for other accounts), as many clients do not know their sa password due to negligence or a lack of knowledge transfer between current and former staff. This can be important if there are client applications with hard-coded passwords in compiled code where the source has been lost. You may verify that application accounts are running under least privilege (to limit the impact of any SQL injection attacks). My company constantly upgrades our discovery and inventory scripts as new security breaches are discovered to ensure that our clients' environments are protected as quickly as possible.
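The simplest of these checks can be sketched as pure logic. This is an illustrative toy, not a real audit tool; checks against actual SQL Server logins would work with password hashes rather than plaintext, and the weak-word list here is an assumed sample:

```python
# Toy password audit: blank passwords, password equal to the login name,
# and membership in a small weak-word list.
WEAK_WORDS = {"password", "sa", "admin", "123456"}  # assumed sample list

def audit_login(login, password):
    """Return a list of findings for one login/password pair."""
    findings = []
    if password == "":
        findings.append("blank password")
    if password and password.lower() == login.lower():
        findings.append("password matches login name")
    if password.lower() in WEAK_WORDS:
        findings.append("password in weak-word list")
    return findings

print(audit_login("sa", ""))              # ['blank password']
print(audit_login("appuser", "appuser"))  # ['password matches login name']
```

Each finding maps directly to a line item in the sign-off report the client receives.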
The details collected through the initial discovery process will also highlight current or looming problem areas: irregular or unverified backups, no point-in-time recovery, auto growth in percentages, dwindling disk space, etc.
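A fragment of that problem-area report might look like the following sketch, run over inventory data the discovery scripts have already collected (field names and the 10 GB threshold are hypothetical):

```python
# Flag common problem areas in per-database inventory records.
def flag_problems(db):
    """Return a list of problem-area findings for one database record."""
    problems = []
    if db.get("growth_type") == "percent":
        problems.append("auto growth set in percent")
    if db.get("free_disk_gb", 0) < 10:          # assumed 10 GB threshold
        problems.append("dwindling disk space")
    if not db.get("last_backup_verified", False):
        problems.append("unverified backups")
    return problems

record = {"name": "Sales", "growth_type": "percent",
          "free_disk_gb": 4, "last_backup_verified": False}
print(flag_problems(record))
```

Surfacing these findings during the initial visit, before they become outages, is what the discovery process is for.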
Physical server characteristics
Collect details about the server's physical characteristics. With WMI and other tools you can gather the number of processors, processor types, hard drives, RAM, etc. This information can prove invaluable during performance analysis and benchmarking, which often require knowledge of hardware specifics. For example, average disk queue length should be less than two times the number of hard drives in an array. If a server shows a high disk queue length, you must know the number of drives in the array to interpret the value correctly and make hardware recommendations. Knowledge of physical hardware characteristics also helps you advise the client on whether a server is under- or overutilized, and to spec out and purchase hardware that will provide optimal service as they grow.
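The disk queue rule of thumb above is one line of code once the drive count is known, which is exactly why the hardware inventory matters:

```python
# Rule of thumb from the text: average disk queue length should stay
# below two outstanding I/Os per drive in the array.
def disk_queue_ok(avg_queue_length, drives_in_array):
    """True when the queue length is within the 2-per-drive guideline."""
    return avg_queue_length < 2 * drives_in_array

print(disk_queue_ok(10, 8))  # True:  10 < 16, fine for an 8-drive array
print(disk_queue_ok(10, 4))  # False: 10 >= 8, investigate the 4-drive array
```

The same counter value is healthy on one array and a red flag on another; without the inventory, the number is uninterpretable.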
Benchmarking and change management
During the initial site visit it is important to benchmark all SQL Server databases. This provides a baseline against which to measure system response, so you can act proactively should response decline; in that case you must determine the source of the issue and address it before it becomes a problem for the client. Performance tends to change in response to application or database code changes. With well-defined change management procedures, such performance fluctuations can be quickly attributed to code changes or other factors. Without a clear understanding of what has changed in the environment, you are at best guessing at the root cause of performance degradation or improvement.
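The baseline comparison can be sketched as follows. Counter names and the 20% tolerance are illustrative assumptions, not prescribed values:

```python
# Compare current counter samples against the stored benchmark baseline
# and report any counter that drifted beyond the tolerance.
baseline = {"batches_per_sec": 400.0, "avg_response_ms": 120.0}

def drifted(current, baseline, tolerance=0.20):
    """Return counters that moved more than `tolerance` from baseline."""
    out = {}
    for name, base in baseline.items():
        change = (current[name] - base) / base
        if abs(change) > tolerance:
            out[name] = round(change, 2)
    return out

current = {"batches_per_sec": 390.0, "avg_response_ms": 180.0}
print(drifted(current, baseline))  # {'avg_response_ms': 0.5}
```

Cross-referencing the drift report with the change-management log is what turns "it feels slower" into "response time rose 50% after Tuesday's deployment."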
Change management also implies code review. This can be a negotiated part of your VAR services. We have written scripts that check code against best practices and score it according to problem areas. Code with a high problem score is reviewed by our performance team.
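The scoring idea might look like the sketch below. The patterns and weights are hypothetical examples, not the author's actual rules:

```python
import re

# Scan T-SQL text for flagged patterns, weight each hit, and produce a
# total score; high-scoring code gets routed to human review.
RULES = [
    (re.compile(r"SELECT\s+\*", re.I), 2, "SELECT *"),
    (re.compile(r"\bNOLOCK\b", re.I), 3, "NOLOCK hint"),
    (re.compile(r"\bCURSOR\b", re.I), 5, "cursor usage"),
]

def score_sql(sql):
    """Return (total score, list of (label, hit count)) for one script."""
    total, hits = 0, []
    for pattern, weight, label in RULES:
        n = len(pattern.findall(sql))
        if n:
            total += n * weight
            hits.append((label, n))
    return total, hits

sql = "DECLARE c CURSOR FOR SELECT * FROM Orders WITH (NOLOCK)"
total, hits = score_sql(sql)
print(total)  # 10: SELECT * (2) + NOLOCK (3) + cursor (5)
```

Automated scoring triages the review queue; it does not replace the performance team's judgment on the code it surfaces.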
Physical server location
With comprehensive discovery scripts, your knowledge of a client's environment will most likely be many times better than their own -- and they will rely on you for it. For a complete inventory, record details such as the server's physical location, in case you ever need to service the machine or dispatch a hardware technician.
Bringing your clients under a consistent, structured environment allows you to scale your services. Done correctly, remote database services bring SQL Servers under your management so that you can connect to any client's SQL Server database and find a familiar layout. Time spent learning the particulars and gotchas of each environment is kept to a minimum, because all environments are similar once you have brought them under your structured management.
Nothing is more important at this point than proactive monitoring. Reactive monitoring quickly leads to staff burnout and turnover, and eventually to client dissatisfaction. Install monitoring tools that alert you when performance counters cross defined thresholds or when jobs fail. Our staff provides 24x7 coverage to ensure that any performance degradation or job failure is addressed quickly, within the negotiated SLA.
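The threshold-alerting side of such monitoring reduces to a check like the one below. Threshold values and counter names are assumed examples:

```python
# Evaluate the latest counter samples and job outcomes against thresholds
# and emit the alerts an on-call DBA would be paged on.
THRESHOLDS = {"cpu_pct": 90, "disk_free_gb": 5}

def alerts(counters, failed_jobs):
    """Return the list of alert messages for one polling cycle."""
    out = []
    if counters["cpu_pct"] > THRESHOLDS["cpu_pct"]:
        out.append("CPU above threshold")
    if counters["disk_free_gb"] < THRESHOLDS["disk_free_gb"]:
        out.append("disk space below threshold")
    out.extend(f"job failed: {job}" for job in failed_jobs)
    return out

print(alerts({"cpu_pct": 95, "disk_free_gb": 3}, ["NightlyBackup"]))
```

The proactive part is the thresholds themselves: they fire while the trend is still a warning, not after the disk is full.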
Our job as a VAR is client retention. While this means an ongoing and expanding revenue stream for us, it ultimately means our clients are happy with our services and, more importantly to them, that our services are cost effective. We have found that the initial client meeting, a comprehensive discovery process, structured management and proactive monitoring allow us to provide our clients with high-quality services and to scale our remote database services.
About the author: Hilary Cotter has been involved in IT for more than 20 years as a Web and database consultant. Microsoft first awarded Cotter the Microsoft SQL Server MVP award in 2001. Cotter received his bachelor of applied science degree in mechanical engineering from the University of Toronto and subsequently studied economics at the University of Calgary and computer science at UC Berkeley. He is the author of a book on SQL Server transactional replication and is currently working on books on merge replication and Microsoft search technologies. Ask Hilary your remote database services questions.