CMG 2006 in Reno was a huge success. Phil and Dom each presented a paper. Meeting with some of the brightest people in the performance and capacity planning area not only helps us to augment our own network, but also gives us new ideas on how to get customers better focused on the importance of having a sound design for any IT project that they may be involved with.
It's now official!
We have two papers accepted at this year's CMG International Conference in Reno, Nevada.
Fortuitous Technologies today announced new sizing and tuning services for Linux and Unix servers. Both services are low-cost alternatives that cater to the burgeoning Linux server marketplace. The sizing service provides CPU, memory, I/O, and network interconnect recommendations in raw performance units.
The tuning service provides kernel recommendations for Linux and other commercial Unix variants, covering kernel tuning parameters for a range of operating systems and applications such as MySQL, PostgreSQL, Apache, Asterisk, and Oracle.
With these new services, Fortuitous can now provide a broad range of performance, consulting, and design services for Linux, Unix, FreeBSD, and related technologies. The new tuning and sizing services add entry-level capabilities for open source solutions such as Linux, Apache, MySQL, PostgreSQL, PHP, Postfix, and Asterisk.
Both services start at $499. Fortuitous anticipates similar service announcements for small cluster and load-balanced systems. More information can be found at http://Fortuitous.com.
Summary: Costs of performance and capacity planning are often recovered many fold and fast.
Though often seen as an extra expense, performance and capacity planning usually saves a project money in the long run. Costs are typically recovered by the completion of the initial implementation phase, if not sooner. Moreover, projects that are properly planned will achieve their design goals and allow future scalability at a significantly lower total cost.
Performance Planning Issues
In today’s parallel, heterogeneous, and interconnected IT wilderness, predicting and controlling the cost factors surrounding systems performance and capacity planning is overwhelming at best. For larger IT projects, it is not uncommon to find that performance tuning and capacity problems represent the largest and least controlled expenses. To illustrate, a sudden slowdown of an enterprise-wide application may trigger user complaints, delayed projects, an IT support backlog, and ultimately a financial loss to the organization. By the time the performance problem is located, analyzed, worked around, tested, and verified, an organization may have spent tens of thousands of dollars in time, IT resources, and hardware, only to fall back into the same vicious cycle the very next year.
When performance is designed into the final solution, costs can be contained and reduced while ensuring the required performance and scalability potential. This approach shifts the emphasis away from the installation and setup phase to the planning and design stages. It is paramount that IT not only understand the expected workload behavior, but responsibly act by conducting feasibility and design studies prior to spending many thousands of dollars on a solution that, in the best case, may not be optimal and, in the worst case, completely fails.
Hidden Costs of Poor Planning
- Unneeded Hardware
Application performance issues have an immediate impact on customer satisfaction and an organization’s bottom line. It is not uncommon that when a performance issue surfaces, organizations start adding more (often expensive) hardware into the operational mix without fully understanding where the problem truly lies or how the extra hardware will affect overall system performance. Treating the symptoms rather than the underlying cause may provide an organization with some relief in the short run, but it intensifies the issues in the long run, as even more hardware has to be troubleshot and analyzed. In addition, redundant hardware carries these costs:
- Extra Cooling (several times the electricity costs)
- Extra IT Overhead (See Below)
- Hardware Replacement Costs (drives, fans, power supplies, etc.)
- IT Overhead
In addition to hardware costs, the IT personnel costs associated with unplanned performance tuning exercises can be excruciating. IT managers may be forced to commit hundreds of man-hours to solve even simple performance problems. Because the actual source of the problem may not be easily identified, IT personnel may spend hours or days analyzing and tuning the wrong subsystem. To make matters worse, some performance tuning exercises may require crossing over into the domains of security, reliability, or availability. Proper design and planning can reduce these costs.
- Security and HA
Without proper initial planning, fire-fighting scenarios such as these may result in additional work for an organization’s security and high-availability (HA) personnel as well. Proper design and planning can significantly reduce these costs, too.
- Lost Revenue
Without proper planning, projects run the risk of partial or total failure, which can drive away associated revenue. There is no excuse for a project to fail from a lack of adequate planning and design. Even if the system is not designed for a direct revenue stream, its failure can cause losses for internal customers and related systems.
As an example of the shortcomings of zealous hardware spending, consider CompanyX, whose 10-node cluster would not perform well under stress. Management authorized IT to buy 5 more servers to increase performance, which resulted in no noticeable gain. When the system was finally examined, a simple model immediately showed that the memory and I/O subsystems were the bottleneck, and that the optimal number of compute nodes was about 10.
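The kind of model that exposed CompanyX's bottleneck can be sketched in a few lines of operational analysis. The service demands below are hypothetical placeholders, not CompanyX's actual measurements; the point is only that throughput is capped by the slowest shared resource, so adding compute nodes past the crossover point buys nothing.

```python
# Hypothetical per-job service demands in seconds -- illustrative only.
# CPU work can be divided across compute nodes; the shared memory and
# I/O subsystems in this sketch cannot.
demands = {"cpu": 1.2, "memory": 0.10, "io": 0.12}

def max_throughput(nodes, demands):
    """Operational-analysis upper bound: throughput <= 1 / max demand."""
    effective = {
        "cpu": demands["cpu"] / nodes,  # parallelizable across nodes
        "memory": demands["memory"],    # shared resource: unchanged
        "io": demands["io"],            # shared resource: unchanged
    }
    bottleneck = max(effective, key=effective.get)
    return 1.0 / effective[bottleneck], bottleneck

for n in (5, 10, 15):
    x, b = max_throughput(n, demands)
    print(f"{n:2d} nodes: at most {x:.1f} jobs/s (bottleneck: {b})")
```

With these made-up demands, the CPU bound crosses the I/O bound at about 10 nodes; beyond that, the extra servers sit behind the same saturated I/O subsystem, which is exactly the pattern CompanyX hit.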
In short, the proper approach to managing systems performance is to design performance into the solution. If the system is already in production, the recommendation is to conduct a performance study that covers the application, operating system, and hardware subsystems. It is paramount to understand not only the actual workload behavior, but also the interaction between the application, the OS, and the hardware. Treating performance-related issues early in an IT project avoids these hidden-cost scenarios and is far cheaper than after-the-fact tuning following deployment.
Fortuitous Technologies offers vendor neutral design, feasibility, performance tuning, and capacity planning services for application, database, hardware, and operating systems. They can be found at http://fortuitous.com
Abstract: Dominique A. Heger, Fortuitous Technologies, Austin, TX, dom(at)fortuitous.com
In today’s parallel, heterogeneous, and interconnected IT landscape, predicting and controlling the cost factors surrounding systems performance and capacity planning may seem overwhelming to many organizations. For larger IT projects, it is not uncommon to experience scenarios where the cost factors for performance tuning and capacity planning reflect the largest and the least controlled expenses. To illustrate, a sudden slowdown of an enterprise-wide application may trigger user complaints, delayed projects, an IT support backlog, and ultimately a financial loss to the organization. By the time the performance problem is located, analyzed, worked around, tested, and verified, an organization may have spent tens of thousands of dollars in time, IT resources, and hardware, only to fall back into the same vicious cycle in the near future. The rather complex IT environment in most organizations today is often blamed for the difficulty of effectively tackling performance issues. The argument made here is that performance has to be designed into the final solution (hardware- and software-wise). This approach requires shifting the emphasis away from the installation and setup phase to the planning and design stages. It is paramount that organizations not only understand the expected workload behavior, but also act accordingly by conducting feasibility and design studies prior to spending thousands of dollars on a solution that, in the best case, may not be optimal and, in the worst case, just does not work.
See the full article here.
March 15, 2005
For Immediate Release
Fortuitous Technologies has recently announced the addition of strategic performance engineering services, including performance planning, capacity planning, and reliability and feasibility studies. The main focus of these services is the cluster, grid, and supercomputing markets.
Fortuitous Technologies was recently joined by Dr. Dominique Heger, who brings a vast set of talents in performance engineering, modeling, capacity planning and cluster computing to the company. Dr. Heger was a core performance engineer in many large cluster and SMP projects for IBM, including ASCI Purple and ASCI White.
According to Dr. Heger, “An emerging consensus in the performance community is that the unilateral peak performance-centric focus has become misdirected, and that issues such as reliability, availability, maintainability, security, and scalability have emerged as being more important. Applying performance engineering techniques early in a project makes it possible to design reasonable performance targets into a solution.”
With its new performance engineering emphasis, Fortuitous enhances its position in the performance optimization, capacity planning, reliability, and high-availability markets. These services are widely recognized as critical ingredients to the financial, oil, scientific, and biotechnology computing markets.
Fortuitous provides performance engineering, training, and support services for the Linux, Unix, and high-performance computing world. For further information and inquiries, please contact:
Philip Carinhas, CEO
Fortuitous Technologies, Inc.
Cluster and grid computing have become extremely popular, yet very few designers use modern capacity planning techniques to ensure performance.
Cooking on the Grid
Grid computing has become extremely popular in IT circles, mainly because of its potential computing power and cost savings. But grid computing is a multifaceted technology that means different things to different people. Some interpret grids as a heterogeneous group of desktops and servers, while others see them as a group of cluster computers connected over the internet. Some grids are designed for raw CPU power, others for raw I/O, and still others combine data and compute power. Many grids are designed around a specific set of applications. As we’ll see later, this is why grid and cluster design is strongly tied to the intended purpose.
Yet despite the importance of strategic grid and cluster design, very few commercial integrators spend the time or capital to ensure a system’s feasibility and performance. The reasons are manifold: lack of experience, cost concerns, ego, or a shortage of skilled resources. An important exception is the scientific community, whose grids and clusters are often well designed.
A Stitch in Design Saves Nine
So what is really involved in cluster/grid design and planning? First of all, not every application is suited for a cluster and/or grid environment. Customers who switch from an SMP environment to clusters with the intention of someday moving into the intra-grid domain should conduct a feasibility study prior to getting too deep into the cluster business. Second, the customer has to fully understand the current workload behavior and must be able to formulate the goals to be achieved in a cluster/grid environment. Modeling-based sensitivity studies allow the customer to compare (from a relative perspective) design alternatives and to zoom in on the setup that is most feasible for the environment. Because a modeling-based approach is recommended at this stage, no money has to be spent yet on any hardware components. In a nutshell, conducting a comprehensive feasibility and design study early in any cluster/grid project saves the customer substantial money, and replaces the common guessing game with a very pragmatic approach to systems engineering that leads to a stable environment with a high acceptance rate from the user community.
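As a toy illustration of such a sensitivity study, candidate designs can be swept through a simple open queueing-network model before any hardware is bought. The two designs and their service demands below are invented for illustration; each station's residence time follows the standard open-network formula D / (1 - U), valid while utilization U stays below 1.

```python
# Two hypothetical cluster designs, each described by per-job service
# demands (seconds) at three queueing stations. Numbers are placeholders.
designs = {
    "fast-interconnect":  {"cpu": 0.020, "interconnect": 0.005, "io": 0.015},
    "cheap-interconnect": {"cpu": 0.020, "interconnect": 0.018, "io": 0.015},
}

def response_time(demands, rate):
    """Sum of per-station residence times D / (1 - rate * D)."""
    total = 0.0
    for d in demands.values():
        util = rate * d
        if util >= 1.0:
            return float("inf")  # station saturated: design infeasible
        total += d / (1.0 - util)
    return total

for rate in (10, 25, 40):  # offered load in jobs per second
    for name, demands in designs.items():
        r = response_time(demands, rate)
        print(f"{rate:2d} jobs/s  {name:18s}  R = {r * 1000:6.1f} ms")
```

Sweeping the offered load like this shows, in relative terms, how quickly the cheaper interconnect degrades under load, which is exactly the kind of comparison a customer can make before spending money on hardware.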
Grid/intra-grid/cluster planning and design studies are fundamental parts of the implementation process, having the greatest impact when designed into the final project and encompassing the application (workload), network/interconnect, OS, I/O, memory, and CPU subsystems. In almost all circumstances, companies recoup the design costs in the long run, as fewer firefighting exercises are necessary.
Fortuitous Technologies provides comprehensive performance, planning, and design services based on solid mathematical and statistical methods. They can be contacted at http://Fortuitous.com.
Fortuitous has entered into a partnership with GAX Corp to offer performance and cluster services for Linux and other UNIX systems.
December 10, 2005
16:00 GMT, 2005
For Immediate Release
Fortuitous Technologies of Austin, TX has recently signed an agreement with GAX S.A. of Luxembourg (http://www.gax.com). GAX will market and deliver key Fortuitous services and products in the European marketplace.
GAX provides comprehensive web interface design, web-based applications, and IT consultancy to the European banking and financial service markets. This agreement gives Fortuitous an excellent entry point into the European markets for its performance and cluster services.
Fortuitous CEO Philip Carinhas stated, “The capacity planning and performance marketplace in Europe is a vital market for performance services. Working with GAX allows both companies to offer these services in Europe, with our expert services delivered through a well-established European company in a fast, familiar, and efficient way.”
Fortuitous provides comprehensive support and training solutions in the performance tuning, capacity planning, reliability, high-availability, and Unix server administration markets. Fortuitous also offers IT services and training in Clusters, High Availability, and Network Security, on a range of UNIX platforms such as Linux, AIX, Solaris, HPUX, and FreeBSD.
For further information about Fortuitous:
Fortuitous Technologies, Inc.
November 3, 2005
16:00 GMT, 2005
For Immediate Release
Fortuitous Technologies is expanding into the performance, capacity, reliability, and disaster recovery markets.
Fortuitous currently provides networking, documentation, and IT administration services to the Linux, FreeBSD, OSX, and UNIX marketplace.
With this expansion, Fortuitous enters the market of performance tuning, capacity planning, reliability, and high-availability.
These services are recognized as critical ingredients in the E-Commerce, financial, and other high-performance computing markets.
“We recognize that there is a significant change in the way that companies allocate and plan for new information resources,” says Fortuitous CEO Philip Carinhas. “Now that the internet and E-Commerce are well established, companies want to know how, when, and where to grow their services in a scalable and reliable way. Once companies grow beyond their original boundaries, planning services become critical to their current operations and future expansion.”
Founded in 1999, Fortuitous Technologies, Inc. (“Fortuitous”) is a leading provider of enterprise technology and IT services, including key networking, administration, planning, and performance services.
More information can be found at http://fortuitous.com.
This is the Fortuitous News page. Please see http://fortuitous.com for more information.