Sales and profits are plummeting and customers are demanding better deals. What can you do to silence customer complaints, cover fixed costs, and buy time until the economy rebounds? Often, companies will cut prices. But is this knee-jerk reaction the best strategy for pricing your products in a downturn? Definitely not: it may hurt profitability when the economy rebounds, signal to your customers that you're easy prey for additional discounts, and cloud your brand's hard-earned image. Learn how to craft your pricing strategies to strengthen your business now and to help prime your business for later growth.
Before you think about adjusting prices, think again. A knee-jerk reaction to the recession is never good for business in the long run, and could even erode your brand image. Instead, make your pricing decisions based on clear strategic goals.
When times are good, pricing mistakes can be easily forgiven. But when the economy sours, a misguided pricing strategy can shrink profitability, warp customer relationships, and destroy a brand.
When sales and profits are plummeting and customers are demanding better deals, the instinctive response is to cut prices. This silences customer complaints, helps cover fixed costs, and buys time until the economy rebounds. A price cut can also boost sales quickly, especially when there is no money for advertising or other promotions.
But such a knee-jerk reaction may not be the best strategy. Price cuts now may affect your company's profitability when the upturn occurs. They may signal to customers that you're easy prey for additional discounting. And they may cloud your brand's hard-won image.
Pricing decisions should not be viewed as Band-Aid solutions for bleeding income statements; they should be part of a long-term strategy for fiscal fitness. When economic storm clouds gather, trim your production levels, postpone expansion plans that aren't absolutely vital to your future growth, and slash nonessential costs wherever you can. This prepares you to pursue low-price business opportunities that help you maintain your cash flow without drastically reducing your production capacity.
Crafting the right pricing strategies will not only strengthen your business now, it will also prime it for growth later. To bolster sales while avoiding a price cut's dampening effect on long-term profitability, keep the following advice in mind:
Consider the impact
Profitability is not the only prism through which you should view pricing. Other important perspectives include:
•Volume. Too many firms fail to account for the effects of price on volume and of volume on costs. In a recession, trying to recover these costs through a price increase can be fatal.
•Impact on customer relationships. "Sucker pricing" creates ill will and tarnishes your brand.
•Impact on the industry. Price cuts not backed by cost reductions often lead to competitive counterattacks, which erode profitability.
Adjust your sales goals
"Don't fight today's sales wars with yesterday's pricing strategies," says Mitchell. Sales goals set when checkbooks were open may no longer be suitable for a recession. Executives experience what Holden calls the "coffin corner of costing" when, for the purposes of making the numbers, they overemphasize capacity utilization and become willing to cut the price of high-value products. The wireless industry, for example, generated strong demand with its low pricing but then was unable to recover its costs of capital.
Instead of sales goals, set dollar contribution goals for products, market segments, and individual customers. To do this you may have to invest in financial systems that can track process costs as well as direct costs. Moreover, setting profitability goals may mean abandoning market-share goals. After all, a large market share doesn't necessarily mean increased profitability. But switching to profitability benchmarks can help you pursue other low-price business.
It may also make sense to change the basis for your pricing. Most experts believe that pricing based on value (the economic or psychological benefits delivered by your product or service) is much more effective than competitor-, cost-, or customer-driven pricing strategies. Remember, too, that the basis for customer value can shift when the economic climate changes. When times are good, customers often place a premium on your maintaining production capacity to ensure timely delivery of their orders; otherwise, their sales suffer. But in a recession, logistical services may be more valuable.
Understand your competitive advantage
In a recession, pricing should be shaped by industry position and long-term strategy. If your competitive advantage derives from a low-cost structure, cost cutting can pump up your market share, positioning your firm for a payoff when the economy improves. But a common mistake, says Holden, "is to use price as a competitive advantage for high-value products by giving away services or discounting to your best customers. You erode the base of profitable customers and reduce the potential for profitability when the recession ends."
Leverage your segmentation strategy
Especially if you have high fixed costs, use pricing to generate incremental revenue from your segmented customer base. Strive for "first-class," "business-class," and "economy" pricing, the way the airlines do. First-class customers receive extra value with minimal discounting; economy customers get minimum value. Such segmentation based on price sensitivity creates sales opportunities that can offset losses in other areas, especially since there is often little difference in production costs among the offerings.
Offerings can be segmented not only by value added but also by time (for example, peak-load purchasing), location, or purchase quantity. "The more you can slice and dice your prices and offerings without affecting your brand, the more you can sustain profitability." Dynamic pricing represents an extension of such a segmented pricing strategy; here, prices shift instantaneously in response to changes in supply and demand. Although the practice doesn't suit every company, early testers of dynamic pricing software have been pleasantly surprised to discover how much more they can charge without affecting sales volume. The consulting firm Accenture reports that a price increase of just 5% can improve operating profits by 55% if sales volume remains constant.
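To see the arithmetic behind that Accenture figure, here is a minimal sketch. The revenue and the 9% baseline operating margin are assumptions chosen for illustration only (a margin of roughly 9% happens to reproduce the reported uplift). With volume and costs unchanged, the whole price increase falls straight through to operating profit, so the percentage uplift is simply the price increase divided by the baseline margin.

```python
# Illustrative arithmetic: a small price increase with constant volume and
# costs flows entirely to profit, so uplift = price increase / baseline margin.
# All figures below are assumed for the example, not taken from the report.

revenue = 1_000_000            # baseline annual revenue (assumed)
operating_margin = 0.09        # baseline operating margin of 9% (assumed)
price_increase = 0.05          # 5% across-the-board price increase

baseline_profit = revenue * operating_margin
extra_profit = revenue * price_increase        # volume and costs unchanged
uplift = extra_profit / baseline_profit

print(f"Baseline operating profit: {baseline_profit:,.0f}")
print(f"New operating profit:      {baseline_profit + extra_profit:,.0f}")
print(f"Profit uplift:             {uplift:.0%}")   # about 56% at a 9% margin
```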
Pamper loyal customers
Losing a customer now represents a double whammy: It drains customer equity and raises the cost of acquiring a replacement. Keep your best customers happy by bolstering loyalty programs or providing additional services. Consider offering product training or other classes for your B2B customers—not only will it augment the value you offer customers, it will also make it more difficult for those customers to switch to another provider.
Plug revenue leaks
Companies can run aground on pricing gaffes once covered by the high tide of a good economy. A common oversight is not recovering all the costs involved in services, delivery, or other processes, says Mitchell. Set minimum order quantities so that processing costs won't eat all the profits. Strengthen your collection efforts to shrink the time between orders and receipt of payment. Without undermining customer value, establish a price menu for "free" services such as delivery or favorable payment terms. When sold separately, such offerings increase revenue opportunities. They also provide a benchmark value for customers who formerly discounted them because they were free.
In a recession, revenue leaks also occur because sales forces become less resistant to customer pressures. They knock down the price until the sale is won, despite the impact on profitability. Ideally, prices should be negotiated based on business rules (volume, delivery, financing) and not according to the negotiating skills of purchasing agents. They should also be based on the value to the customer. But sales forces often oppose value pricing because it usually means higher prices and a greater willingness to walk away from price-sensitive deals. To encourage the desired behavior, compensate your sales force based on its contribution to profitability and/or customer equity, not just on sales volume.
Shift the battleground
When you negotiate with customers, bring other factors besides the payment amount into the conversation, for example payment terms or ongoing training. Some additional suggestions:
•Change the volume requirement to raise revenue and lower unit costs.
•Bundle products that increase customer value.
•In exchange for a discount, ask for a multiyear contract to smooth out your revenue and production variability.
Protect your brands
Brands become more valuable during a recession because they offer defensible margins. Sales of cosmetics, for example, often rise during a recession because they represent affordable luxuries or offer a psychological boost. So don't cut prices on your premium brands during a recession; they can be sold without discounts through word-of-mouth or channel promotions that increase visibility and appeal.
The term marketing has changed and evolved over time. Today, marketing is based on providing continual benefits to the customer; these benefits are provided and a transactional exchange takes place. The Chartered Institute of Marketing defines marketing as 'The management process responsible for identifying, anticipating and satisfying customer requirements profitably'. Join me as we take a look at the modern approach to Marketing Management.
Wednesday, December 9, 2009
Getting The Most Out Of Your Price
When you are setting up a business for the first time, it can be quite difficult to know what price to set. You need to think about what your customers are willing to pay, what the competition is charging, and your costs.
As a rule of thumb, it is best to aim as high as you can, because it is always easier to lower your price than to increase it. If your price is low, you may attract a lot of customers but you may lose them if you need to increase the price. You may also be losing money because your price does not cover your costs. Customers may think that if your product is cheap, it can't be good. Customers are often happy to pay for quality - value for money is usually the best option.
When comparing your product to competitive products, you do not necessarily have to follow their pricing. Ask yourself if your product or service compares favourably and whether you can justify a higher price.
When considering your costs, it would be helpful to prepare a forecast. This will allow you to be more informed and realistic about the price you charge. Bear in mind that you will need to charge a fair and competitive price that yields a reasonable profit. Your gross profit will need to cover all overheads and expenses. You take your money (your drawings) from what is left, which is your net profit.
Costing Formulae
There are three main costing formulae you can use to work out a cost for your service or cost for your product.
1 Daily/Hourly Rate
If you are providing a service, for example consultancy, you may find it useful to cost your service based on a time calculation. You do this by first adding up the number of days you will not be providing a service in the year. This will include weekends off, holidays, bank holidays, administration (about a day a week) and contingency days for any emergencies. Subtract this figure from the number of days in the year to give your potential earning days. Then work out how many hours you will work each day to give your total potential hours for the year.
To calculate your daily/hourly rate the formula is:
Business Overheads (fixed costs per annum) + PSB (Personal Survival Budget - the money taken out of the business to live on), divided by the Number of days/hours available to sell = Cost per day/hour.
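As a minimal illustration of this daily/hourly rate formula, the sketch below works through it in Python. Every figure (overheads, Personal Survival Budget, days off, hours per day) is an assumed example value, not a recommendation.

```python
# Time-based costing for a service business (illustrative figures only).

days_in_year = 365
weekends = 104                 # non-earning days
holidays = 25
bank_holidays = 8
admin_days = 52                # roughly one day a week
contingency = 10

non_earning_days = weekends + holidays + bank_holidays + admin_days + contingency
earning_days = days_in_year - non_earning_days     # potential earning days
hours_per_day = 7
earning_hours = earning_days * hours_per_day

business_overheads = 12_000    # fixed costs per annum (assumed)
psb = 24_000                   # Personal Survival Budget (assumed)

cost_per_day = (business_overheads + psb) / earning_days
cost_per_hour = (business_overheads + psb) / earning_hours

print(f"Earning days: {earning_days}, cost per day: {cost_per_day:.2f}")
print(f"Cost per hour: {cost_per_hour:.2f}")
```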
2 Cost of Product
The formula below applies if you are making something that you are going to sell.
Business Overheads + PSB, divided by Production (the total number of items you produce), plus the Variable Cost per item (the variable costs per item will be known once you start making them) = Total Cost per item.
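The same kind of sketch applies to product costing: spread the fixed costs and the Personal Survival Budget across planned production, then add the variable cost of each item. Again, every figure below is an assumed example rather than a real cost structure.

```python
# Product costing (illustrative figures only).

business_overheads = 12_000     # fixed costs per annum (assumed)
psb = 24_000                    # Personal Survival Budget (assumed)
planned_production = 4_000      # total items produced in the year (assumed)
variable_cost_per_item = 3.50   # materials, packaging, etc. (assumed)

fixed_cost_per_item = (business_overheads + psb) / planned_production
total_cost_per_item = fixed_cost_per_item + variable_cost_per_item

print(f"Fixed cost per item: {fixed_cost_per_item:.2f}")   # 9.00
print(f"Total cost per item: {total_cost_per_item:.2f}")   # 12.50
```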
3 Mark up and margin
Your business can only exist if you make a profit. The profit can be expressed either as a percentage of the cost price or of the selling price. If the profit is based on the cost price, it is known as Mark Up and can be expressed as follows:
Mark Up (%) = (Selling Price - Cost Price) / Cost Price x 100
If based on the selling price, it is known as Margin and can be expressed as follows:
Margin (%) = (Selling Price - Cost Price) / Selling Price x 100
When pricing your product, make sure that the selling price provides an adequate margin to produce a profit.
There is no definitive method of setting a price. Your aim should be to set your prices initially at the level that gives you the highest possible profit. Easier said than done!
Tips To Know When Setting Your Prices
Pricing decisions must be made on a broad, thoughtful basis. They require a basic understanding of both your financial and business goals. Below are a few principles to consider when you decide what prices to put on your product or service.
Keep your prices realistic. A realistic price is the price you set after taking into consideration various factors: the direction of your business, your cost structure and expenses, your resources and financial goals. Avoid setting your prices based on “what everybody is charging.” What is right for your competitors may not be profitable for your business. After all, their goals, strategies and financials may be different from yours. Research your competition and see what they are charging, but do not copy their pricing structure just to charge what everybody else is charging. Set your prices based on your own situation.
Cover all your costs. The price of your item should cover the costs associated with it, its contribution to the overhead, and profit. A successful pricing strategy is one that results in the most dollars after all your costs are met. Be careful in setting your prices too low: while it may attract a large sales volume, you may not be making enough revenue to cover the costs of selling the merchandise. If you set your prices too high, your sales volume may be so low you can't cover operating expenses.
Check your prices against inflation. Your prices must keep up with inflation. Inflation increases your cost of doing business, with the prices of your materials, overhead and other costs increasing. If you maintain your prices despite rising inflation, you will erode your profit margin. Allow your business to increase its prices at least once a year, but give your customers sufficient warning about the price increase. Once you've established your policies, constantly monitor your prices and operating costs to ensure profit.
Include in your pricing the value of your time. Avoid committing the mistake of not including a salary for yourself, particularly if you are operating a service business. Your time is valuable, and you need to compute it in your pricing structure.
Customers are not always looking for the lowest price. Price is not always the topmost concern of customers. There are many customers who do not mind paying higher prices, particularly if they know that they are purchasing exclusive merchandise, or your business is located in a convenient or high-end location. Many customers are willing to pay premium prices for quality service: speedy delivery; helpful and friendly customer relations; excellent product knowledge, or satisfaction in handling complaints.
Price low, but smart. A common pricing strategy for small businesses, particularly new entrants into the market, is to price low just to get the work. By pricing low, the aim is to penetrate the market and win as much repeat business as possible.
However, be aware that pricing low can have adverse repercussions on your business. First, a low price may signal a low-quality product or service, so be careful about setting prices too low. Second, it may be difficult to raise prices later on once customers are accustomed to your low prices. Third, your start-up business has yet to develop the economies of scale that would allow it to compete on price.
Use discounts with care. Offering discounts is a good strategy for encouraging repeat/bulk orders, bundling sales, and early payment of customers. Discounts also allow you to more quickly sell products with vanishing opportunity -- e.g. products with sell-by dates, seasonal and quick obsolescence like fashion and technology. You can also stimulate demand for your products during the times when your product/service is less popular. Discounts are also used to clear out merchandise that has become outdated.
Tuesday, December 1, 2009
Specifications of Service
Any service can be clearly, completely, consistently and concisely specified by means of the following 12 standard attributes, which conform to the MECE principle (Mutually Exclusive, Collectively Exhaustive):
1.Service Consumer Benefits
2.Service-specific Functional Parameter(s)
3.Service Delivery Point
4.Service Consumer Count
5.Service Readiness Times
6.Service Support Times
7.Service Support Language(s)
8.Service Fulfillment Target
9.Maximum Impairment Duration per Incident
10.Service Delivering Duration
11.Service Delivery Unit
12.Service Delivering Price
The meaning and content of these attributes are:
•Service Consumer Benefits describe the (set of) benefits which are callable, receivable and effectively utilizable for any authorized service consumer and which are provided to him as soon as he requests the offered service. The description of these benefits must be phrased in the terms and wording of the intended service consumers.
•Service-specific Functional Parameters specify the functional parameters which are essential and unique to the respective service and which describe the most important dimension of the servicescape, the service output or outcome, e.g. maximum e-mailbox capacity per registered and authorized e-mail service consumer.
•Service Delivery Point describes the physical location and/or logical interface where the benefits of the service are made accessible, callable, receivable and utilizable to the authorized service consumers. At this point and/or interface, the preparedness for service delivery can be assessed, and the effective delivery of the service itself can be monitored and controlled.
•Service Consumer Count specifies the number of intended, identified, named, registered and authorized service consumers which shall be and/or are allowed and enabled to call and utilize the defined service for executing and/or supporting their business tasks or private activities.
•Service Readiness Times specify the distinct agreed times of day when
1)The described service consumer benefits are
i)Accessible and callable for the authorized service consumers at the defined service delivery point
ii)Receivable and utilizable for the authorized service consumers at the respective agreed service level
2)All service-relevant processes and resources are operative and effective
3)All service-relevant technical systems are up and running and attended by the operating team
4)The specified service benefits are comprehensively delivered to any authorized requesting service consumer without any delay or friction.
The time data are specified in 24 h format per local working day and local time, referring to the location of the intended service consumers.
•Service Support Times specify the determined and agreed times of day when the usage and consumption of commissioned services is supported by the service desk team for all identified, registered and authorized service consumers within the service customer's organizational unit or area. The service desk is/shall be the so-called Single Point of Contact (SPoC) for any service consumer inquiry regarding the commissioned, requested and/or delivered services, particularly in the event of service denial, i.e. an incident. During the defined service support times, the service desk can be reached by phone, e-mail, web-based entries and/or fax. The time data are specified in 24 h format per local working day and local time, referring to the location of the intended service consumers.
•Service Support Languages specifies the national languages which are spoken by the service desk team(s) to the service consumers calling them.
•Service Fulfillment Target specifies the service provider's promise of effective and seamless delivery of the defined benefits to any authorized service consumer requesting the service within the defined service times. It is expressed as the promised minimum ratio of the count of successful individual service deliveries to the count of requested service deliveries. The effective service fulfillment ratio can be measured and calculated per single service consumer or per consumer group and may be referred to different time periods (workday, calendar week, work month, etc.).
•Maximum Impairment Duration per Incident specifies the allowable maximum elapsed time [hh:mm] between
i)The first occurrence of a service impairment, i.e. service quality degradation or service delivery disruption, whilst the service consumer consumes and utilizes the requested service,
ii)The full resumption and complete execution of the service delivery to the content of the affected service consumer.
•Service Delivering Duration specifies the promised and agreed maximum period of time for effectively delivering all specified service consumer benefits to the requesting service consumer at the currently chosen service delivery point.
•Service Delivery Unit specifies the basic portion for delivering the defined service consumer benefits. The service delivery unit is the reference and mapping object for all cost for service generation and delivery as well as for charging and billing the consumed service volume to the service customer who has commissioned the service delivery.
•Service Delivering Price specifies the amount of money the service customer has to pay for the distinct service volumes his authorized service consumers have consumed. Normally, the service delivering price comprises two portions (a short billing sketch follows this list):
a)A fixed basic price portion for basic efforts and resources which provide accessibility and usability of the service delivery functions, i.e. service access price
b)A price portion covering the service consumption based on
i)Fixed flat rate price per authorized service consumer and delivery period, without regard to the consumed service volumes,
ii)Staged prices depending on consumed service volumes,
iii)Fixed price per particularly consumed service delivering unit.
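To make the attribute list and the two-part price concrete, here is a hypothetical sketch: a service specification held in a small data structure, plus a billing function that charges a fixed access price and staged prices per consumed delivery unit. The field names follow the 12-attribute list above; the e-mail service values, the price breaks and the monthly_bill helper are invented purely for illustration.

```python
# Sketch: service specification as a data structure and a two-part bill
# (fixed access price + staged consumption prices). Illustrative values only.
from dataclasses import dataclass

@dataclass
class ServiceSpec:
    consumer_benefits: str
    functional_parameters: str
    delivery_point: str
    consumer_count: int
    readiness_times: str
    support_times: str
    support_languages: tuple
    fulfillment_target: float           # promised minimum success ratio
    max_impairment_per_incident: str    # hh:mm
    delivering_duration: str            # hh:mm
    delivery_unit: str
    delivering_price: dict              # {"access": fixed portion, "staged": ((break, unit price), ...)}

email_service = ServiceSpec(
    consumer_benefits="send and receive e-mail",
    functional_parameters="mailbox capacity 2 GB per authorized consumer",
    delivery_point="corporate webmail portal",
    consumer_count=250,
    readiness_times="Mon-Fri 07:00-19:00 local time",
    support_times="Mon-Fri 08:00-18:00 local time",
    support_languages=("English", "German"),
    fulfillment_target=0.995,
    max_impairment_per_incident="02:00",
    delivering_duration="00:05",
    delivery_unit="delivered e-mail",
    delivering_price={"access": 500.0,
                      "staged": ((10_000, 0.010), (50_000, 0.008), (float("inf"), 0.005))},
)

def monthly_bill(spec: ServiceSpec, units_consumed: int) -> float:
    """Fixed access price plus staged prices per consumed delivery unit."""
    total = spec.delivering_price["access"]
    remaining, previous_break = units_consumed, 0
    for volume_break, unit_price in spec.delivering_price["staged"]:
        in_this_stage = min(remaining, volume_break - previous_break)
        total += in_this_stage * unit_price
        remaining -= in_this_stage
        previous_break = volume_break
        if remaining <= 0:
            break
    return total

# 60,000 delivered e-mails: 500 access + 10,000*0.010 + 40,000*0.008 + 10,000*0.005
print(f"Monthly bill: {monthly_bill(email_service, 60_000):.2f}")   # 970.00
```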
Tuesday, November 24, 2009
Service Marketing
Service Marketing is the marketing of intangible products, such as hairdressing, cleaning, insurance and travel.
Marketing a service-based business is different from marketing a goods-based business.
There are several major differences, including:
1.The buyer's purchase is intangible
2.The service may be based on the reputation of a single person
3.It's more difficult to compare the quality of similar services
4.The buyer cannot return the service
Service
What is a Service?
•A service is the action of doing something for someone or something. It is largely intangible (i.e. not material). A product is tangible (i.e. material) since you can touch it and own it. A service tends to be an experience that is consumed at the point where it is purchased, and cannot be owned since it quickly perishes.
•The term Service is used in many other industry buzzwords, namely Web Services, Service Oriented Architecture (SOA), Enterprise Service Bus (ESB) and Application Service Provider (ASP). It's an extremely overloaded term. However, Services marketing is marketing based on relationship and value. It may be used to market a service or a product. It's a strange, almost mythical combination of competing requirements, in that it is both isolated and interoperable.
Characteristics of a Service
There are five characteristics to a service which are considered below:
•Lack of ownership.
You cannot own and store a service like you can a product. Services are used or hired for a period of time. For example, when buying a ticket to the UK the service lasts maybe nine hours each way, but consumers want and expect excellent service for that time. Because you can measure the duration of the service, consumers become more demanding of it.
•Intangibility
You cannot hold or touch a service, unlike a product. That said, although services are intangible, the experience consumers obtain from the service has an impact on how they will perceive it: the quality of the customer service, and the location and presentation of the place where they purchase the service.
•Inseparability
Services cannot be separated from the service providers. A product, once produced, can be taken away from the producer. However, a service is produced at or near the point of purchase. Take visiting a restaurant: ordering your meal, waiting for and receiving it, and the service provided by the waiting staff are all part of the service production process and are inseparable; the staff in a restaurant are as much a part of the process as the quality of the food provided.
•Perishability
Services last a specific time and cannot be stored like a product for later use. If travelling by train, coach or air the service will only last the duration of the journey. The service is developed and used almost simultaneously. Again because of this time constraint consumers demand more.
•Heterogeneity
It is very difficult to make each service experience identical. If travelling by plane, the service quality may differ from the first time you travelled with that airline to the second, because the cabin crew are more or less experienced. Generally, systems and procedures are put into place to make sure the service provided is consistent all the time, and training in service organisations is essential for this; however, there will always be subtle differences.
Wednesday, November 11, 2009
How Effective and Efficient Are Your Pricing Strategies?
First, we shall take a look at the meaning of pricing and pricing strategies; this will give us a clearer understanding of the topic.
What is pricing? Pricing is the method adopted by a firm to set its selling price. It usually depends on the firm's average costs, and on the customer's perceived value of the product in comparison to his or her perceived value of the competing products.
What is a pricing strategy? Price planning that takes into view factors such as a firm's overall marketing objectives, consumer demand, product attributes, competitors' pricing, and market and economic trends.
The pricing strategy of your business can ultimately determine your fate. As a business owner you can ensure profitability and longevity by paying close attention to your pricing strategy.
Commonly, for many businesses, the pricing strategy has been to be the lowest price provider in the market. This approach comes from taking a superficial view of competitors and assuming one can win business by having the lowest price.
Below are some pricing strategies to consider.
•Competitive pricing: Use competitors' retail (or wholesale) prices as a benchmark for your own prices. Price slightly below, above or the same as your competitors, depending on your positioning strategies. Note that you must collect competitor pricing information by observation rather than by asking them; otherwise it could be seen as collusion.
•Cost plus mark-up: This is the opposite of competitive pricing. Instead of looking at the market, look at your own cost structure. Decide the profit you want to make and add it to your costs to determine selling price. While using this method will assure a certain per-unit margin, it may also result in prices that are out-of-line with customer expectations, hurting total profit.
•Loss Leader: A loss leader is an item you sell at or below cost in order to attract more customers, who will also buy high-profit items. This is a good short-term promotion technique if you have customers that purchase several items at one time.
•Close out: Keep this pricing technique in mind when you have excess inventory. Sell the inventory at a steep discount to avoid storing or discarding it. Your goal should be to minimize loss, rather than making a profit.
•Membership or trade discounting: This is one method of segmenting customers. Attract business from profitable customer segments by giving them special prices. This could be in the form of lower price on certain items, a blanket discount, or free product rewards.
•Bundling and quantity discounts: Other ways to reward people for larger purchases are through quantity discounts or bundling. Set the per-unit price lower when the customer purchases a quantity of five instead of one, for example. Alternately, charge less when the customer purchases a bundle or several related items at one time. Bundle overstocks with popular items to avoid a closeout. Or, bundle established items with a new product to help build awareness.
•Versioning: Versioning is popular with services or technical products, where you sell the same general product in two or three configurations. A trial or very basic version may be offered at low or no cost.
Avoiding the Lowest Pricing Strategy
Having the lowest price isn't a strong position for business. Larger competitors with deep pockets and the ability to have lower operating costs will destroy any small business trying to compete on price alone. Avoiding the low pricing strategy starts with looking at the demand in the market by examining three factors:
1. Competitive Analysis: Don't just look at your competitor's pricing. Look at the whole package they offer. Are they serving price-conscious consumers or the affluent group? What are the value-added services if any?
2. Ceiling Price: The ceiling price is the highest price the market will bear. Survey experts and customers to determine pricing limits. The highest price in the market may not be the ceiling price.
3. Price Elasticity: If the demand for your product or service is less elastic, you can set a higher ceiling on prices. Demand tends to be less elastic when there are few competitors, when buyers perceive quality, and when consumers are not habituated to looking for the lowest price in your industry.
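Here is a minimal sketch of how price elasticity can be estimated and what it implies for revenue; the price and quantity figures are assumed for illustration. When demand is inelastic (absolute elasticity below 1), a price increase raises revenue even though volume falls.

```python
# Arc price elasticity of demand and its effect on revenue (assumed figures).
# Elasticity = (% change in quantity) / (% change in price), midpoint method.

p1, q1 = 100.0, 1_000      # current price and units sold (assumed)
p2, q2 = 110.0, 950        # after a 10% price increase (assumed)

pct_change_qty = (q2 - q1) / ((q1 + q2) / 2)
pct_change_price = (p2 - p1) / ((p1 + p2) / 2)
elasticity = pct_change_qty / pct_change_price

revenue_before = p1 * q1
revenue_after = p2 * q2

print(f"Arc elasticity: {elasticity:.2f}")   # about -0.54, i.e. inelastic
print(f"Revenue before: {revenue_before:,.0f}, after: {revenue_after:,.0f}")
```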
The low-price strategy is best avoided by small businesses, but there are conditions, such as a price war, that can drag a company into the lowest-price battle.
Evading a Price War
A price war can wreak havoc in any industry and leave many businesses out of business. Care should be taken to avoid a price war.
Take these tips to evade a deadly price war:
•Enhance Exclusivity: Products or services that are exclusive to your business provide protection from falling prices.
•Drop High Maintenance Goods: There may be products or services in your business that have high customer service and maintenance costs. Drop the unprofitable lines and find out what customers don't want.
•Value-added: Find value your business can add to stand out in the marketplace. Be the most unique business in the category.
•Branding: Develop your brand name in the market. Brand name businesses can always stand strong in a price war.
Carefully consider your pricing decisions. Your business depends on it.
Tuesday, November 10, 2009
Benchmarking
What is Benchmarking?
•Benchmarking is the process of comparing your business processes and performance metrics, including cost, cycle time, productivity, or quality, to those of another organisation that is widely considered to be an industry-standard benchmark or best practice. Essentially, benchmarking provides a snapshot of the performance of your business and helps you understand where you are in relation to a particular standard. The result is often a business case and "burning platform" for making changes in order to make improvements.
•Benchmarking can be simply defined as a continuous process to find and implement best practices that will lead to superior performance. As the definition implies, benchmarking is a process that will make a company's operations lean, and improve quality and productivity.
In the quest for increased competitiveness, companies often ask themselves the question, "How are we doing?" Asking this question leads logically to the next question, "Compared to what?" To fully answer this second question involves an examination of a company's own operations, and subsequently comparing the operations with those of other organisations identified to be leaders in the field. Such comparisons are at the heart of benchmarking.
There are three major reasons for an organisation to embark upon benchmarking. These are:
•Benchmarking provides an objective evaluation of a company's business processes against similar processes in other organisations
•Benchmarking serves as a vehicle to source for improvement ideas from other organisations
•Benchmarking broadens an organisation's experience base by providing insights into systems and methods that work and those that don't. It therefore supports the notion of a learning organisation.
The benchmarking process can be applied to all facets of a company's business, be it in products, services or business processes. However, the focus of most benchmarking projects is on business processes, because the effective management of these processes, including quality, speed, and service, is of vital importance to achieving superior performance and being more competitive.
There is no single benchmarking process that has been universally adopted. The wide appeal and acceptance of benchmarking has led to various benchmarking methodologies emerging. The first book on benchmarking, written by Kaiser Associates, offered a 7-step approach. Robert Camp (who wrote one of the earliest books on benchmarking in 1989) developed a 12-stage approach to benchmarking.
The 12-stage methodology consists of:
1. Select subject
2. Define the process
3. Identify potential partners
4. Identify data sources
5. Collect data and select partners
6. Determine the gap
7. Establish process differences
8. Target future performance
9. Communicate
10. Adjust goal
11. Implement
12. Review/recalibrate
Types of Benchmarking
Depending on the objectives and scope of benchmarking, different types of benchmarking processes can be distinguished depending on what is compared and to whom it is being compared.
Let's take a closer look at these.
Benchmarking of What?
•Performance benchmarking is a brief evaluation process that compares company performance measures against a standard or target that has been established, or performance data of other organisations.
•Process benchmarking analyses and compares the methods and practices of a participating company's processes in order that another company can learn from the best and improve their own processes. In effect, it involves the identification of best practices that lie behind superior performance.
•Strategic benchmarking is an in-depth analysis aimed at identifying fundamental areas for improvement, i.e. a company's strengths and weak points. Information concerning other company's strategic choices is collected in order to improve a company's own strategic planning and positioning.
Benchmarking against Whom?
•Internal benchmarking is the comparison between a company's different departments, units or subsidiaries, including those based in different countries.
•Competitive benchmarking entails the direct comparison of a company's own performance against its competitors. This is easier in some respects because many external factors that affect performance are similar between the benchmarked companies, but it may be more difficult because of the competitive relationship between the companies, which can make data collection difficult.
•Functional benchmarking involves the comparison of processes against non-competitor companies within the same industry or service area that share common technological or market characteristics. Compared to competitive benchmarking, it is easier to find benchmarking partners, since the relationship between companies is not one of direct competition.
Generic benchmarking focuses on the comparison of a company's own processes against best processes, irrespective of industry or service sector. It studies innovative methods or technologies with the aim of identifying technologies that will lead to breakthroughs. This is particularly relevant for environmental benchmarking because best environmental practices are rarely industry-specific.
Benefits from Benchmarking
•Improving communication
•Professionalizing the organization / processes, or for
•Budgetary reasons
•In outsourcing projects
Cost of Benchmarking
There are costs to benchmarking, although many companies find that it pays for itself. The three main types of costs are:
•Visit costs - This includes hotel rooms, travel costs, meals, a token gift, and lost labour time.
•Time costs - Members of the benchmarking team will be investing time in researching problems, finding exceptional companies to study, visits, and implementation. This will take them away from their regular tasks for part of each day so additional staff might be required.
•Benchmarking database costs - Organizations that institutionalize benchmarking into their daily procedures find it is useful to create and maintain a database of best practices and the companies associated with each best practice
Limitation of Benchmarking
•Benchmarking is a tough process that needs a lot of commitment to succeed.
•Time-consuming and expensive.
•More than once benchmarking projects end with the 'they are different from us' syndrome or competitive sensitivity prevents the free flow of information that is necessary.
•Comparing performances and processes with 'best in class' is important and should ideally be done on a continuous basis (the competition is improving its processes also...).
•Is the success of the target company really attributable to the practice that is benchmarked? Are the companies comparable in strategy, size, model, culture?
•What are the downsides of adopting a practice?
Benchmarking is the process of comparing your business processes and performance metrics, including cost, cycle time, productivity, and quality, against those of another organisation that is widely considered to represent an industry standard or best practice. Essentially, benchmarking provides a snapshot of the performance of your business and helps you understand where you stand in relation to a particular standard. The result is often a business case, and a "burning platform", for making changes in order to improve.
Benchmarking can also be defined simply as a continuous process of finding and implementing best practices that lead to superior performance. As the definition implies, benchmarking is a process that makes a company's operations lean and improves quality and productivity.
In the quest for increased competitiveness, companies often ask themselves the question, "How are we doing?" Asking this question leads logically to the next question, "Compared to what?" To fully answer this second question involves an examination of a company's own operations, and subsequently comparing the operations with those of other organisations identified to be leaders in the field. Such comparisons are at the heart of benchmarking.
There are three major reasons for an organisation to embark upon benchmarking. These are:
•Benchmarking provides an objective evaluation of a company's business processes against similar processes in other organisations
•Benchmarking serves as a vehicle to source for improvement ideas from other organisations
•Benchmarking broadens an organisation's experience base by providing insights into systems and methods that work and those that don't. It therefore supports the notion of a learning organisation.
The benchmarking process can be applied to all facets of a company's business, be it products, services or business processes. However, the focus of most benchmarking projects is on business processes, because the effective management of these processes, including quality, speed, and service, is of vital importance to achieving superior performance and becoming more competitive.
There is no single benchmarking process that has been universally adopted. The wide appeal and acceptance of benchmarking has led to various benchmarking methodologies emerging. The first book on benchmarking, written by Kaiser Associates, offered a 7-step approach. Robert Camp (who wrote one of the earliest books on benchmarking in 1989) developed a 12-stage approach to benchmarking.
The 12-stage methodology consists of:
1. Select subject
2. Define the process
3. Identify potential partners
4. Identify data sources
5. Collect data and select partners
6. Determine the gap
7. Establish process differences
8. Target future performance
9. Communicate
10. Adjust goal
11. Implement
12. Review and recalibrate
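Steps 5 and 6 of the methodology (collect data, determine the gap) are the most mechanical and can be illustrated with a few lines of code. The sketch below is a minimal, hypothetical Python example; the metric names and figures are invented and are not drawn from any real benchmarking study.

```python
# Minimal sketch of Camp's steps 5 and 6: collect data and determine the gap.
# All metric names and figures below are hypothetical, for illustration only.

own_metrics = {"order_cycle_days": 12.0, "cost_per_order": 48.0, "defect_rate_pct": 2.5}
partner_metrics = {"order_cycle_days": 7.0, "cost_per_order": 35.0, "defect_rate_pct": 0.8}

def determine_gap(own, partner):
    """Absolute and relative gap for every metric both parties report."""
    gaps = {}
    for name in own.keys() & partner.keys():
        absolute = own[name] - partner[name]
        relative_pct = 100.0 * absolute / partner[name]
        gaps[name] = (absolute, relative_pct)
    return gaps

for metric, (absolute, relative_pct) in determine_gap(own_metrics, partner_metrics).items():
    print(f"{metric}: {absolute:+.1f} ({relative_pct:+.0f}% versus the benchmark partner)")
```

The gap figures are only the starting point: the later stages (establish process differences, target future performance) still depend on understanding why the benchmark partner performs better.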
Types of Benchmarking
Depending on the objectives and scope of a benchmarking exercise, different types of benchmarking can be distinguished according to what is compared and with whom it is compared.
Let's take a closer look at each.
Benchmarking of What?
•Performance benchmarking is a relatively brief evaluation that compares a company's performance measures against an established standard or target, or against the performance data of other organisations.
•Process benchmarking analyses and compares the methods and practices behind a participating company's processes so that companies can learn from the best and improve their own processes. In effect, it involves identifying the best practices that lie behind superior performance.
•Strategic benchmarking is an in-depth analysis aimed at identifying fundamental areas for improvement, i.e. a company's strengths and weak points. Information about other companies' strategic choices is collected in order to improve the company's own strategic planning and positioning.
Benchmarking against Whom?
•Internal benchmarking is the comparison between a company's different departments, units or subsidiaries, including those based in different countries.
•Competitive benchmarking entails the direct comparison of a company's own performance against its competitors. This is easier in some respects because many external factors that affect performance are similar between the benchmarked companies, but it may be more difficult because of the competitive relationship between the companies, which can make data collection difficult.
•Functional benchmarking involves the comparison of processes against non-competitor companies within the same industry or service area that share common technological or market characteristics. Compared to competitive benchmarking, it is easier to find benchmarking partners, since the relationship between companies is not one of direct competition.
•Generic benchmarking focuses on the comparison of a company's own processes against best processes, irrespective of industry or service sector. It studies innovative methods or technologies with the aim of identifying those that will lead to breakthroughs. This is particularly relevant for environmental benchmarking because best environmental practices are rarely industry-specific.
Benefits from Benchmarking
Benchmarking is typically undertaken to realise benefits in areas such as:
•Improving communication
•Professionalizing the organization and its processes
•Supporting budgetary decisions
•Supporting outsourcing projects
Cost of Benchmarking
There are costs to benchmarking, although many companies find that it pays for itself. The three main types of costs are:
•Visit costs - This includes hotel rooms, travel costs, meals, a token gift, and lost labour time.
•Time costs - Members of the benchmarking team will be investing time in researching problems, finding exceptional companies to study, visits, and implementation. This will take them away from their regular tasks for part of each day so additional staff might be required.
•Benchmarking database costs - Organizations that institutionalize benchmarking into their daily procedures find it useful to create and maintain a database of best practices and the companies associated with each best practice.
Limitations of Benchmarking
•Benchmarking is a tough process that needs a lot of commitment to succeed.
•Time-consuming and expensive.
•Benchmarking projects often end in the 'they are different from us' syndrome, or competitive sensitivity prevents the free flow of the information that is needed.
•Comparing performances and processes with 'best in class' is important and should ideally be done on a continuous basis (the competition is improving its processes also...).
•Is the success of the target company really attributable to the practice that is benchmarked? Are the companies comparable in strategy, size, model, culture?
•What are the downsides of adopting a practice?
Wednesday, November 4, 2009
Porter’s Five Forces Analysis
Porter's five forces analysis is a framework for industry analysis and business strategy development created by Michael E. Porter of Harvard Business School in 1979. It uses concepts developed in Industrial Organization (IO) economics to derive five forces that determine the competitive intensity, and therefore the attractiveness, of a market. Attractiveness in this context refers to overall industry profitability. An "unattractive" industry is one where the combination of forces acts to drive down overall profitability. A very unattractive industry would be one approaching "pure competition".
Porter referred to these forces as the micro environment, to contrast it with the more general term macro environment. They consist of those forces close to a company that affect its ability to serve its customers and make a profit. A change in any of the forces normally requires a company to re-assess the marketplace. Overall industry attractiveness does not imply that every firm in the industry will return the same profitability. Firms are able to apply their core competences, business model or network to achieve a profit above the industry average. A clear example of this is the airline industry. As an industry, profitability is low, and yet individual companies, by applying unique business models, have been able to make a return in excess of the industry average.
Main Aspects of Porter’s Five Forces Analysis
The original competitive forces model, as proposed by Porter, identified five forces which would impact on an organization’s behaviour in a competitive market. These include the following:
•The rivalry between existing sellers in the market.
•The power exerted by the customers in the market.
•The impact of the suppliers on the sellers.
•The potential threat of new sellers entering the market.
•The threat of substitute products becoming available in the market.
Understanding the nature of each of these forces gives organizations the necessary insights to enable them to formulate the appropriate strategies to be successful in their market.
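A common way of making the framework operational is to rate each force and read the combined ratings as an indication of industry attractiveness. Porter's model prescribes no scoring scheme, so the 1-5 scale, the example scores, and the simple averaging rule in the sketch below are assumptions made purely for illustration.

```python
# Hypothetical 1-5 scoring of the five forces (5 = the force is very strong).
# The scale, scores, and averaging rule are illustrative assumptions, not part of Porter's model.

force_scores = {
    "rivalry between existing sellers": 4,
    "power of buyers": 3,
    "power of suppliers": 2,
    "threat of new entrants": 3,
    "threat of substitutes": 5,
}

average_strength = sum(force_scores.values()) / len(force_scores)
attractiveness = 6 - average_strength  # stronger forces imply a less attractive industry

for force, score in sorted(force_scores.items(), key=lambda item: -item[1]):
    print(f"{force}: {score}/5")
print(f"Average force strength: {average_strength:.1f}/5")
print(f"Indicative attractiveness: {attractiveness:.1f}/5")
```

A scoring exercise like this is only a summary device; the value of the analysis lies in the discussion of each force, as described below.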
Force 1: The Degree of Rivalry
The intensity of rivalry, which is the most obvious of the five forces in an industry, helps determine the extent to which the value created by an industry will be dissipated through head-to-head competition. The most valuable contribution of Porter's "five forces" framework in this respect may be its suggestion that rivalry, while important, is only one of several forces that determine industry attractiveness.
•This force is located at the centre of Porter's diagram;
•Rivalry is most likely to be high in industries where there is a threat from substitute products and where suppliers and buyers already exert power in the market.
Force 2: The Threat of Entry
Both potential and existing competitors influence average industry profitability. The threat of new entrants usually depends on the market entry barriers. These can take diverse forms and serve to prevent an influx of firms into an industry whenever profits, adjusted for the cost of capital, rise above zero. Entry barriers exist whenever it is difficult or not economically feasible for an outsider to replicate the incumbents' position (Porter, 1980b; Sanderson, 1998). The most common forms of entry barriers, apart from intrinsic physical or legal obstacles, are as follows:
•Economies of scale: for example, benefits associated with bulk purchasing;
•Cost of entry: for example, investment into technology;
•Distribution channels: for example, ease of access for competitors;
•Cost advantages not related to the size of the company: for example, contacts and expertise;
•Government legislation: for example, the introduction of new laws might weaken a company's competitive position;
•Differentiation: for example, a brand that cannot be copied (such as Champagne).
Force 3: The Threat of Substitutes
The threat that substitute products pose to an industry's profitability depends on the relative price-to-performance ratios of the different types of products or services to which customers can turn to satisfy the same basic need. The threat of substitution is also affected by switching costs – that is, the costs in areas such as retraining, retooling and redesigning that are incurred when a customer switches to a different type of product or service. It also involves:
•Product-for-product substitution (email for mail or fax), which is based on substituting for the same need;
•Generic substitution (Video suppliers compete with travel companies);
•Substitution that relates to something that people can do without (cigarettes, alcohol).
Force 4: Buyer Power
Buyer power is one of the two horizontal forces that influence the appropriation of the value created by an industry (refer to the diagram). The most important determinants of buyer power are the size and the concentration of customers. Other factors are the extent to which the buyers are informed and the concentration or differentiation of the competitors. Kippenberger (1998) states that it is often useful to distinguish potential buyer power from the buyer's willingness or incentive to use that power, willingness that derives mainly from the “risk of failure” associated with a product's use.
•This force is relatively high where there are a few, large players in the market, as is the case with retailers and grocery stores;
•Present where there is a large number of undifferentiated, small suppliers, such as small farming businesses supplying large grocery companies;
•Low cost of switching between suppliers, such as from one fleet supplier of trucks to another.
Force 5: Supplier Power
Supplier power is the mirror image of buyer power. As a result, the analysis of supplier power typically focuses first on the size and concentration of suppliers relative to industry participants, and second on the degree of differentiation in the inputs supplied. The ability to charge customers different prices in line with differences in the value created for each of those buyers usually indicates that the market is characterized by high supplier power and, at the same time, by low buyer power (Porter, 1998). Bargaining power of suppliers exists in the following situations:
•Where the switching costs are high (switching from one Internet provider to another);
•High power of brands (McDonald's, British Airways, Tesco);
•Possibility of forward integration of suppliers (Brewers buying bars);
•Fragmentation of customers (not in clusters) with a limited bargaining power (Gas/Petrol stations in remote places).
The nature of competition in an industry is strongly affected by these five forces. The stronger the power of buyers and suppliers, and the stronger the threats of entry and substitution, the more intense competition is likely to be within the industry. However, these five factors are not the only ones that determine how firms in an industry will compete – the structure of the industry itself may play an important role. Indeed, the whole five-forces framework is based on an economic theory known as the "Structure-Conduct-Performance" (SCP) model: the structure of an industry determines organizations' competitive behaviour (conduct), which in turn determines their profitability (performance). In concentrated industries, according to this model, organizations would be expected to compete less fiercely, and make higher profits, than in fragmented ones. However, as Haberberg and Rieple (2001) state, the histories and cultures of the firms in the industry also play a very important role in shaping competitive behaviour, and the predictions of the SCP model need to be modified accordingly.
Strengths of the Five Competitive Forces Model
•The model is a strong tool for competitive analysis at industry level (compare: PEST Analysis).
•It provides useful input for performing a SWOT Analysis.
Limitations of Porter's Five Forces Model
•Care should be taken when using this model not to underestimate or underemphasize the importance of the existing strengths of the organization (the inside-out view of strategy).
•The model was designed for analyzing individual business strategies. It does not cope with synergies and interdependencies within the portfolio of large corporations.
•From a more theoretical perspective, the model does not address the possibility that an industry could be attractive because certain companies are in it.
•Some people claim that environments which are characterized by rapid, systemic and radical change require more flexible, dynamic or emergent approaches to strategy formulation. Sometimes it may be possible to create completely new markets instead of selecting from existing ones.
Porter's Six Forces model and its relationship to the standard Five Forces model
Porter's Five Forces model has an extension referred to as Porter's Six Forces model. It is considerably less popular, as its acceptance has been less positive than that of the Five Forces model. The Six Forces model is otherwise very similar, the only difference being the addition of a sixth force to the framework. This sixth force is termed the relative power of other stakeholders, and it can refer to a number of different groups or entities, depending on which factor has the greatest influence, including:
• Complementors – One school of thought takes the sixth force to be complementors: businesses offering products that complement those of the sector being analysed (Grove, 1996). Grove argues that these complementary businesses, as a sixth factor, affect the industry because changes in them (such as new techniques, approaches or technologies) can alter the dynamics between the industry and its complementors.
• The government – The sixth force in the framework can also be considered to be the government, and is included in the framework if it has potential to impact on all the other five forces (Gordon, 1997). Thus, the government can have direct impact in the industry as the sixth force, but can also have indirect impact or influence by affecting the other five forces, whether favourably or unfavourably.
• The public – Yet other viewpoints look at the public as the sixth force in the model, particularly if the public has a strong influence in the dynamics of the sector resulting in changes to the other forces or in the sector as a whole.
• Shareholders – This group can also be considered a potential sixth force. This has become more relevant in recent years, as shareholder activism in the boardroom has increased significantly and the management of firms is scrutinised much more closely, and even 'threatened', if actions favoured by shareholders are not pursued.
• Employees – Employees could also be considered the sixth force if they wield extraordinarily strong influence on firms in a particular sector. For example, in the automobile sector in the US a large part of the workforce is unionised, and the workforce could therefore be considered the sixth force instead of the government or complementors.
While a sixth force has been added to Porter's original Five Forces model, acceptance of this extended framework has been somewhat limited. This could be for two reasons. First, there is no single, definite sixth force across all sectors; it differs from sector to sector. Second, even where a sixth force can be defined, its influence can often be captured within the other five forces, which makes its inclusion in the framework less compelling.
Ecological Model of Competition
The ecological model of competition is a reassessment of the nature of competition in the economy. Traditional economics models the economy on the principles of physics (force, equilibrium, inertia, momentum, and linear relationships). This can be seen in the economics lexicon: terms like labour force, market equilibrium, capital flows, and price elasticity. This is probably due to historical coincidence. Classical Newtonian physics was the state of the art in science when Adam Smith was formulating the first principles of economics in the 1700s.
According to the ecological model, it is more appropriate to model the economy on biology (growth, change, death, evolution, survival of the fittest, complex inter-relationships, and non-linear relationships). Businesses operate in a complex environment with interlinked sets of determinants. Companies co-evolve: they influence, and are influenced by, competitors, customers, governments, investors, suppliers, unions, distributors, banks, and others. We should look at this business environment as a business ecosystem that both sustains, and threatens the firm. A company that is not well matched to its environment might not survive. Companies that are able to develop a successful business model and turn a core competency into a sustainable competitive advantage will thrive and grow. Very successful firms may come to dominate their industry (referred to as category killers).
Monday, October 26, 2009
SWOT Analysis
Definition
SWOT is an abbreviation for Strengths, Weaknesses, Opportunities and Threats.
SWOT analysis is an assessment of these four factors. It is used within organizations in the early stages of strategic and marketing planning, as well as in problem solving, decision making, and in making staff aware of the need for change. It can also be used at a personal level when examining your career path or determining possible career development.
SWOT analysis is an important tool for auditing the overall strategic position of a business and its environment.
Once key strategic issues have been identified, they feed into business objectives, particularly marketing objectives. SWOT analysis can be used in conjunction with other tools for audit and analysis, such as PEST analysis and Porter's Five-Forces analysis. It is also a very popular tool with business and marketing students because it is quick and easy to learn.
The Key Distinction - Internal and External Issues
Strengths and weaknesses are Internal factors. For example, a strength could be your specialist marketing expertise. A weakness could be the lack of a new product.
Opportunities and threats are External factors. For example, an opportunity could be a developing distribution channel such as the Internet, or changing consumer lifestyles that potentially increase demand for a company's products. A threat could be a new competitor in an important existing market or a technological change that makes existing products potentially obsolete.
It is worth pointing out that SWOT analysis can be very subjective - two people rarely come up with the same version of a SWOT analysis even when given the same information about the same business and its environment. Accordingly, SWOT analysis is best used as a guide and not a prescription. Adding and weighting criteria for each factor increases the validity of the analysis.
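One way to apply the weighting just mentioned is to give each factor an importance weight and a rating, and then total the weighted scores for each of the four quadrants. The sketch below is a hypothetical illustration of that idea; the factors, weights, and ratings are all invented.

```python
# Hypothetical weighted SWOT: each factor carries a weight (importance, summing to 1.0
# within its quadrant) and a rating from 1 to 5. All entries are invented examples.

swot = {
    "Strengths": [("specialist marketing expertise", 0.6, 4), ("strong cash position", 0.4, 3)],
    "Weaknesses": [("lack of a new product", 0.7, 4), ("high staff turnover", 0.3, 2)],
    "Opportunities": [("new online distribution channel", 0.5, 5), ("changing consumer lifestyles", 0.5, 3)],
    "Threats": [("new competitor in a key market", 0.6, 4), ("technology making products obsolete", 0.4, 5)],
}

def weighted_score(factors):
    """Weighted average rating for one SWOT quadrant."""
    return sum(weight * rating for _, weight, rating in factors)

for quadrant, factors in swot.items():
    print(f"{quadrant}: weighted score {weighted_score(factors):.2f} out of 5")
```

The numbers themselves matter less than the discipline of stating, factor by factor, how important it is and how strongly it applies.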
Where is S.W.O.T. being applied?
S.W.O.T. Analysis, as it is sometimes called, can be performed in a variety of applications and situations. It can be used as a situation analysis that feeds into the strategic planning process at corporate or company level. It can also be applied to evaluate a particular situation in terms of the organisation's capabilities. In short, we use S.W.O.T. as a situation analysis tool.
When do we Perform a S.W.O.T. Analysis?
In common practice, S.W.O.T. Analysis is performed during the strategic planning or business budgeting session normally held at the end of a financial year. But performing a S.W.O.T. should not be limited to a yearly affair. You may perform a S.W.O.T. Analysis whenever you need to identify the causes of a non-conformance or need a new solution or strategy.
Who would Perform a S.W.O.T. Analysis?
In most cases, leaders of an organization perform a S.W.O.T. Analysis. However, it should not be limited to this group of people. In fact, anyone who is interested and trained can perform a S.W.O.T. Analysis for the situation they are in. I have seen many situations where heads of department perform a S.W.O.T. Analysis for their own operational issues because they want to develop solutions based on facts.
Why do you need to Perform a S.W.O.T. Analysis?
As can be seen by now, data gathering is an essential part of S.W.O.T. Analysis, so the information collected is likely to be factual. Any solution derived from S.W.O.T. is therefore more realistic and reliable.
How to Perform a S.W.O.T. Analysis?
As data collection is one of the key activities in S.W.O.T. Analysis, allow enough time to bring back the data, typically one to three months before the S.W.O.T. Analysis session is conducted. Once the data is collected, it should be grouped into the four factors. This can be done individually or in a team.
In summary, with some basic understanding of S.W.O.T. Analysis, the solutions derived from it can add value to the organization.
How SWOT Analysis is used to formulate Strategies
This is perhaps the most powerful use of SWOT Analysis in the strategic planning process. I am going to show you how to use the four factors of SWOT to develop strategies.
By now, you will have collected a range of data pertaining to the Strengths, Weaknesses, Opportunities and Threats. The next task is to use them to formulate strategy. Not sure how to do it? Don't worry, I will take you through the steps.
Step 1 – Evaluate the Surrounding
Let's take a moment to imagine the two of us as the coaches of two football teams.
Before the game starts, you and I each have certain strategies that we want our team to follow. As the game progresses, differences emerge between the two teams, both in the game itself and in the condition of the team members.
Step 2 – Identify the Strengths, Weaknesses, Opportunities and Threats
Now it is time to evaluate the teams against the four factors of SWOT. Let's take the following examples as the results of the evaluation:
Strengths - Your team is full of fighting spirit
Weaknesses - One of your team members is hurt
Opportunities - Your opposition seems to be losing stamina
Threats - Your opposition team is full of energy
Note: Some of these factors seem to conflict with each other. For the purpose of this step, the conflict is ignored.
Step 3 - Pair the SWOT factors to formulate Strategies
Now you can start to formulate strategies in four categories, namely:
• SO Strategies (Strengths and Opportunities Strategy)
• ST Strategies (Strengths and Threats Strategy)
• WO Strategies (Weaknesses and Opportunities Strategy)
• WT Strategies (Weaknesses and Threats Strategy)
In this case, your strength, "your team is full of fighting spirit", is paired with your opportunity, "the opposition team is losing stamina". In this scenario, what would you do? Perhaps you formulate a strategy to ATTACK. There you go: you have just formulated an SO (attacking) strategy.
Then you follow the same procedure for the ST, WO and WT strategies, as illustrated in the sketch below.
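Mechanically, Step 3 is a pairing exercise: each internal factor (strength or weakness) is set against each external factor (opportunity or threat), and every pair prompts a candidate strategy. The sketch below only automates the pairing, using the football example above; the strategies themselves, such as "attack" for the SO pair, still have to be thought up by you.

```python
# Sketch of Step 3: generate SO / ST / WO / WT pairings that prompt candidate strategies.
# Factors are taken from the football example; the strategies must still be formulated by hand.
from itertools import product

strengths = ["your team is full of fighting spirit"]
weaknesses = ["one of your team members is hurt"]
opportunities = ["the opposition team is losing stamina"]
threats = ["the opposition team is full of energy"]

pairings = {
    "SO": product(strengths, opportunities),
    "ST": product(strengths, threats),
    "WO": product(weaknesses, opportunities),
    "WT": product(weaknesses, threats),
}

for category, pairs in pairings.items():
    for internal, external in pairs:
        print(f"{category} pairing: '{internal}' + '{external}' -> formulate a strategy")
```

With more factors in each list the number of pairings grows quickly, which is exactly why Steps 4 and 5 below are needed to evaluate and shortlist the options.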
Step 4 – Evaluate the Strategic Options
At the end of this pairing of SWOT factors, you will have ended up with several strategic options. Do a quick evaluation of each of these strategies against the extent to which it meets the company objectives.
Step 5 – Selecting Strategic Options
At this step, you will have a long list of strategic options. Implementing too many strategies may not be practical, so you need to shorten the list to perhaps a maximum of three strategies.
After you have completed all five steps of using SWOT Analysis to formulate strategies, you have a list of strategies to implement in your business.
Pros and Cons of using SWOT in Strategic Planning
You may have gained some basic understanding of SWOT Analysis and would like to start using it for your work or your personal objectives. Whichever way you use it, it can bring about your desired outcome, provided the data you collect for the four factors of S.W.O.T. is objective and relevant.
If you have put SWOT Analysis into real-life practice, you may have faced some difficulties in using it. Don't worry too much: more practice will give you better experience with SWOT Analysis.
In this chapter, I will point out some of the pros and cons of using SWOT Analysis in strategic planning so that you are aware of them. The sample list below should help you to reinforce your understanding of SWOT Analysis.
PROS
1) Factual data are available for understanding external factors as well as internal capabilities
2) It provides a chance to evaluate the external opportunities and threats
3) It gives a factual evaluation of your own strengths and weaknesses compared with competitors
4) It opens up a new dimension on your competitive position
CONS
1) It is time-consuming
2) The data collected may not be current (a member may base a conclusion on a single past event)
3) Differences in opinion arise from differing understanding of the SWOT process
4) Participants may form their own opinion of an event instead of basing it on factual information
Thursday, October 22, 2009
Environmental Scanning
Definition
Careful monitoring of a firm's internal and external environments for detecting early signs of opportunities and threats that may influence its current and future plans.
Objectives of an Environmental Scanning System
•Detecting scientific, technical, economic, social, and political trends and events important to the institution,
•Defining the potential threats, opportunities, or changes for the institution implied by those trends and events,
•Promoting a future orientation in the thinking of management and staff, and
•Alerting management and staff to trends that are converging, diverging, speeding up, slowing down, or interacting.
Fahey and Narayanan (1986) suggest that an effective environmental scanning program should enable decision makers to understand current and potential changes taking place in their institutions' external environments. Scanning provides strategic intelligence useful in determining organizational strategies. The consequences of this activity include fostering an understanding of the effects of change on organizations, aiding in forecasting, and bringing expectations of change to bear on decision making.
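In practice, a scanning system boils down to logging incoming signals, tagging each with a trend category, and flagging whether it looks like an opportunity or a threat. The sketch below is a deliberately simple, hypothetical version of such a log; the categories echo the objectives listed above, while the example signals and their labels are invented.

```python
# Hypothetical environmental scanning log: each signal is tagged with a trend category
# and classified as an opportunity or a threat. All signals are invented examples.
from collections import defaultdict

signals = [
    {"category": "technical", "kind": "opportunity", "note": "low-cost sensing platform announced"},
    {"category": "political", "kind": "threat", "note": "draft legislation tightening data rules"},
    {"category": "economic", "kind": "threat", "note": "key input prices rising for two quarters"},
    {"category": "social", "kind": "opportunity", "note": "shift toward remote service delivery"},
]

by_category = defaultdict(list)
for signal in signals:
    by_category[signal["category"]].append(signal)

for category, items in sorted(by_category.items()):
    threats = sum(1 for s in items if s["kind"] == "threat")
    print(f"{category}: {len(items) - threats} opportunity(ies), {threats} threat(s)")
    for s in items:
        print(f"  - [{s['kind']}] {s['note']}")
```

The value of scanning lies less in the log itself than in the regular discussion of what the converging or diverging trends mean for the institution's plans.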
Experimental Research Designs
In an attempt to control for extraneous factors, several experimental research designs have been developed, including:
•Classical pretest-posttest - The total population of participants is randomly divided into two samples: the control sample and the experimental sample. Only the experimental sample is exposed to the manipulated variable. The researcher compares the pretest results with the posttest results for both samples. Any divergence between the two samples is assumed to be a result of the experiment (a simulation of this design is sketched after this list).
•Solomon four-group design - The population is randomly divided into four samples. Two of the groups are experimental samples and two experience no experimental manipulation of variables. Two groups receive a pretest and a posttest, while two groups receive only a posttest. This is an improvement over the classical design because it controls for the effect of the pretest.
•Factorial design - This is similar to a classical design except that additional samples are used. Each group is exposed to a different experimental manipulation.
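To make the logic of the classical pretest-posttest design concrete, the sketch below simulates it: participants are randomly split into control and experimental samples, both are measured before and after, and a synthetic treatment effect is added only to the experimental group. Every number here is made up; the point is the structure of the comparison, not the data.

```python
# Simulated classical pretest-posttest design: random assignment, pre and post measures,
# and a synthetic treatment effect applied only to the experimental sample.
import random

random.seed(1)
participants = list(range(40))
random.shuffle(participants)
control, experimental = participants[:20], participants[20:]

def measure():
    return 50.0 + random.gauss(0, 5.0)  # baseline score with noise (arbitrary units)

pretest = {p: measure() for p in participants}
posttest = {p: measure() + (8.0 if p in experimental else 0.0) for p in participants}  # 8.0 = synthetic effect

def mean_change(group):
    return sum(posttest[p] - pretest[p] for p in group) / len(group)

print(f"Control change:      {mean_change(control):+.2f}")
print(f"Experimental change: {mean_change(experimental):+.2f}")
print(f"Estimated effect:    {mean_change(experimental) - mean_change(control):+.2f}")
```

Because both groups experience the same extraneous influences between pretest and posttest, the difference between their average changes is attributed to the manipulated variable.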
Advantages and Disadvantages of Experimental Research
Advantages
*Gain insight into methods of instruction
*Intuitive practice shaped by research
*Teachers have bias but can be reflective
*Researcher can have control over variables
*Humans perform experiments anyway
*Can be combined with other research methods for rigor
*Use to determine what is best for population
*Provides for greater transferability than anecdotal research
Disadvantages
*Subject to human error
*Personal bias of researcher may intrude
*Sample may not be representative
*Can produce artificial results
*Results may only apply to one situation and may be difficult to replicate
*Groups may not be comparable
*Human response can be difficult to measure
*Political pressure may skew results
Observational Techniques
What are Observational Techniques?
•Observational techniques (or field research) are social research techniques that involve the direct observation of phenomena in their natural setting.
•Observational Techniques, a form of naturalistic inquiry, allow investigation of phenomena in their naturally occurring settings.
Participant observation is where the researcher joins the population or its organisation or community setting to record behaviours, interactions or events that occur. He or she engages in the activities being studied, but the first priority is the observation. Participation is a way to get close to the action and to get a feel for what things mean to the actors. As a participant, the evaluator is in a position to gain additional insights by experiencing the phenomena for themselves. Participant observation can be used as a long- or short-term technique. The evaluator or researcher has to stay long enough, however, to become immersed in the local environment and culture and to earn acceptance and trust from the regular actors.
Observation consists of observing behaviour and interactions as they occur, but seen through the eyes of the researcher. There is no attempt to participate as a member of the group or setting, although usually the evaluator has to negotiate access to the setting and the terms of research activity. The intention is to ‘melt into the background’ so that an outsider presence has no direct effect on the phenomena under study. He or she tries to observe and understand the situation ‘from the inside’.
Observational techniques share similarities with the ethnographic approach that anthropologists use in studying a culture, although anthropologists typically spend a long time in the field. Aspects of the ethnographic approach are sometimes incorporated into observational methods, as for example where interest is not just in behaviours and interactions but also in features and artefacts of the physical, social and cultural setting. These are taken to embed the norms, values, procedures and rituals of the organisation and reflect the ‘taken for granted’ background of the setting, which influences the behaviours, understandings, beliefs and attitudes of the different actors.
Another form of naturalistic inquiry that complements observational methods is conversation and discourse analysis. This qualitative method studies naturally occurring talk and conversation in institutional and non-institutional settings, and offers insights into systems of social meaning and the methods used for producing orderly social interaction. It can be a useful technique for evaluating the conversational interaction between public service agents and clients in service delivery settings.
Main Steps in Observational Techniques
Observational methods generally involve the following steps.
Step 1. Choice of situations for observation: The settings for observation are defined in advance in relation to the interests of the evaluation commissioners and other key stakeholders. They consist of settings of interaction or of negotiation between public actors and the beneficiaries of the evaluated policy. The researcher negotiates access to the sites of observation with the relevant parties (informally, in the case of participant observation).
Step 2. Observation: The observer observes the course of interaction, taking care to disturb the behaviour of the actors as little as possible. This work consists of note-taking and audio-visual recordings (made as discreetly as possible). The observer can take notes away from research subjects or immediately after the visit.
This step cannot be limited to simple observation but must be complemented by organisational or institutional analysis so as to identify the ways in which social, cultural and physical features of the setting impinge on relations between the actors. The observer must record as much information as possible and capture an insider view of the setting.
Step 3. Analysing the material: One approach to processing the material gathered is to analyse the events observed in terms of characteristic sequences. Each recording is ‘cut up’ just as one would edit a film into sequences.
The observer identifies the ‘evaluative assertions’, that is to say, the sentences which convey an explicit or implicit value judgement. Typical sequences and their analysis are concentrated on these assertions, and reveal the way in which the policy is judged in the field. Used in this way, the tool can shed important new light on the validity and effectiveness of the policy.
Step 4. Analysis of typical sequences with the actors. The typical sequences and assertions are rewritten or modified to make them anonymous. They are then given to representatives of the people observed, for the purpose of collecting their comments and reactions. This step serves to verify that no bias has been created by taking the sequences out of their context. It gives, for each sequence, keys for interpretation which are recognised and validated by the ‘community’ under study.
Note that only one analytical method (the analysis of sequences, or conversations) is described above. A more common, general observation technique is to write field notes and code them afterwards, an ethnographic method; with this approach the material is not usually returned to subjects for verification.
Types of Observation Technique
The most frequently used types of observational techniques are:
•Personal observation
1.Observing products in use to detect usage patterns and problems
2.Observing license plates in store parking lots
3.Determining the socio-economic status of shoppers
4.Determining the level of package scrutiny
5.Determining the time it takes to make a purchase decision
•Mechanical observation
1.Eye-tracking analysis while subjects watch advertisements
(a)Oculometers - what the subject is looking at
(b)Pupilometers - how interested is the viewer
2.Electronic checkout scanners - records purchase behavior
3.On-site cameras in stores
4.Nielsen box for tracking television station watching
5.Voice pitch meters - measures emotional reactions
6.Psychogalvanometer - measures galvanic skin response
•Audits
i)Retail audits to determine the quality of service in stores
ii)Inventory audits to determine product acceptance
iii)Shelf space audits
•Trace Analysis
i)Credit card records
ii)Computer cookie records
iii)Garbology - looking for traces of purchase patterns in garbage
iv)Detecting store traffic patterns by observing the wear in the floor (long term) or the dirt on the floor (short term)
v)Exposure to advertisements
•Content analysis
i)Observe the content of magazines, television broadcasts, radio broadcasts, or newspapers, either articles, programs, or advertisements
Strengths and Limitations of Observational Techniques
Observation is a generic method that involves the collection, interpretation and comparison of data. It shares these characteristics with the case study method. It is therefore particularly well suited to the analysis of the effects of an intervention that is innovative or unfamiliar, and especially the clarification of confounding factors that influence the apparent success or failure of the interventions evaluated.
Observational techniques serve to reveal the discrepancy between the way in which public interventions are understood high up at decision-making level and the way in which they are understood in the field; they highlight the interpretation made of them by individuals in an operational situation.
The observation is generally limited to a small number of settings. Generalisation is therefore possible only if the intervention is sufficiently homogeneous across sites.
It is based on spontaneous or naturalistic data, gathered by an independent and experienced observer. The reliability of the observation depends to a large extent on the professional know-how of the observer-analyst. It is however possible to introduce a structured observational template that can be used by less experienced researchers, when gathering data across a large number of settings.
Despite its advantages, observation requires meticulous preparation to enable the observer to fit into the observed context without disturbing it, as well as considerable time for data collection, making it an expensive method.
The technique allows data to be gathered in difficult situations where other survey techniques cannot be used.
A major strength of using observational techniques, especially those based on Grounded Theory, is that they can capture unexpected data which other methods can miss. The researcher does not define categories of data before going out into the field but is open to “what’s there” – the theory emerges from the data on the ground rather than pre-defined theory influencing what data is collected.
The extent to which an observer disturbs or influences research subjects is never nil; it is usually recommended that observers maintain self-awareness about how they affect the environment they are researching and take account of this in their data collection. In participant observation the researcher aims to become part of a community or environment rather than maintaining a detached status.
Tuesday, October 20, 2009
Sampling
Sampling Ratio
This is the proportion of elements in the population that are selected (for example, one name selected for every two members of the class).
Sampling ratio = sample size / population size.
Sampling Interval
This is the standard distance between elements selected from the sampling frame: sampling interval = population size / sample size.
Sampling Methods
Sampling is a very important part of the Market Research process. If you have surveyed using an appropriate sampling technique, you can be confident that your results can be generalised to the population in question. If the sample were biased in any way, for example if the selection technique gave older people more chance of selection than younger people, it would be inadvisable to make generalisations from the findings.
There are essentially two types of sampling: probability and non-probability sampling.
• Probability Sampling Methods
Probability or random sampling gives all members of the population a known chance of being selected for inclusion in the sample and this does not depend upon previous events in the selection process. In other words, the selection of individuals does not affect the chance of anyone else in the population being selected.
Many statistical techniques assume that a sample was selected on a random basis. There are four basic types of random sampling techniques:
1) Simple Random Sampling
This is the ideal choice as it is a ‘perfect’ random method. Using this method, individuals are randomly selected from a list of the population and every single individual has an equal chance of selection.
This method is ideal, but if it cannot be adopted, one of the following alternatives may be chosen with little shortfall in accuracy.
2) Systematic Sampling
Systematic sampling is a frequently used variant of simple random sampling. When performing systematic sampling, every nth element from the list is selected (n is referred to as the sampling interval), starting from a randomly selected starting point. For example, if we have a listed population of 6000 members and wish to draw a sample of 200, we would select every 30th (6000 divided by 200) person from the list. In practice, we would randomly select a number between 1 and 30 to act as our starting point.
The one potential problem with this method of sampling concerns the arrangement of elements in the list. If the list is arranged in any kind of order, e.g. if every 30th house in the list from which the sample is being recruited is smaller than the others, there is a possibility that the sample produced could be seriously biased.
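As a rough sketch of the procedure just described, the following Python snippet draws a systematic sample of 200 from a hypothetical list of 6000 members using an interval of 30 and a random start; the variable names and figures are illustrative only.

import random

population = list(range(1, 6001))          # hypothetical listed population of 6000 members
sample_size = 200
interval = len(population) // sample_size  # sampling interval: 6000 / 200 = 30

start = random.randint(1, interval)        # random starting point between 1 and 30
# Take every 30th element from the random start (list indices are 0-based, hence start - 1).
sample = population[start - 1::interval]

print(len(sample), sample[:5])             # 200 selected members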
3) Stratified Sampling
Stratified sampling is a variant on the simple random and systematic methods and is used when there are a number of distinct subgroups, within each of which full representation is required. A stratified sample is constructed by classifying the population into sub-populations (or strata), based on some well-known characteristics of the population, such as age, gender or socio-economic status. The selection of elements is then made separately from within each stratum, usually by random or systematic sampling methods.
Stratified sampling methods also come in two types – proportionate and disproportionate.
In proportionate sampling, the strata sample sizes are made proportional to the strata population sizes. For example, if the first stratum is made up of males, then, as around 50% of the UK population is male, the male stratum will need to represent around 50% of the total sample.
In disproportionate methods, the strata are not sampled according to the population sizes, but higher proportions are selected from some groups and not others. This technique is typically used in a number of distinct situations:
The costs of collecting data may differ from subgroup to subgroup.
We might require more cases in some groups if estimates of population values are likely to be harder to make; the larger the sample size (up to certain limits), the more accurate any estimates are likely to be.
We expect different response rates from different groups of people. Therefore, the less co-operative groups might be ‘over-sampled’ to compensate.
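For illustration only, the following sketch shows proportionate allocation across strata; the strata and population figures are invented, and a real study would follow the allocation with random or systematic selection within each stratum.

# Hypothetical strata and population sizes (invented figures).
strata_populations = {"male": 30_500_000, "female": 31_000_000}
total_population = sum(strata_populations.values())
total_sample_size = 1_000

# Proportionate allocation: each stratum's share of the sample matches its share of the population.
allocation = {
    stratum: round(total_sample_size * size / total_population)
    for stratum, size in strata_populations.items()
}
print(allocation)  # roughly 496 males and 504 females

# Disproportionate allocation would simply override these counts, e.g. over-sampling
# a stratum that is expected to have a low response rate.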
4) Cluster or Multi-stage Sampling
Cluster sampling is a frequently used, and usually more practical, random sampling method. It is particularly useful in situations where no list of the elements within a population is available and elements therefore cannot be selected directly. As this form of sampling is conducted by randomly selecting subgroups of the population, possibly in several stages, it should produce results equivalent to a simple random sample.
Sampling is generally done by first sampling at the higher level(s), e.g. randomly sampling countries, then sampling from subsequent levels in turn, e.g. counties within the selected countries, then postcodes within these, then households, until the final stage is reached, at which point the sampling is done in a simple random manner, e.g. sampling people within the selected households. The ‘levels’ in question are defined by subgroups into which it is appropriate to subdivide your population.
Cluster samples are generally used if:
- No list of the population exists.
- Well-defined clusters exist (these will often be geographic areas).
- A reasonable estimate of the number of elements in each level of clustering can be made.
- Often the total sample size must be fairly large to enable cluster sampling to be used effectively.
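The multi-stage logic can be sketched roughly as follows; the geography (regions, postcodes, households) and the nested data structure are assumptions made purely for illustration.

import random

# Hypothetical nested sampling frame: region -> postcode -> household IDs (all invented).
frame = {
    f"region_{r}": {
        f"postcode_{r}_{p}": [f"hh_{r}_{p}_{h}" for h in range(50)]
        for p in range(20)
    }
    for r in range(10)
}

# Stage 1: randomly select regions; stage 2: postcodes within them;
# final stage: a simple random sample of households within each selected postcode.
selected_regions = random.sample(list(frame), k=3)
sample = []
for region in selected_regions:
    selected_postcodes = random.sample(list(frame[region]), k=4)
    for postcode in selected_postcodes:
        sample.extend(random.sample(frame[region][postcode], k=10))

print(len(sample))  # 3 regions x 4 postcodes x 10 households = 120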
•Non-probability Sampling Methods
Non-probability sampling procedures are much less desirable, as they will almost certainly contain sampling biases. Unfortunately, in some circumstances such methods are unavoidable.
In a Market Research context, the most frequently adopted form of non-probability sampling is known as quota sampling. In some ways this is similar to cluster sampling in that it requires the definition of key subgroups. The main difference lies in the fact that quotas (i.e. the number of people to be surveyed) within subgroups are set beforehand (e.g. 25% 16-24 yr olds, 30% 25-34 yr olds, 20% 35-55 yr olds, and 25% 56+ yr olds); usually the proportions are set to match known population distributions. Interviewers then select respondents according to these criteria rather than at random. The subjective nature of this selection means that only a proportion of the population has any chance of being selected in a typical quota sampling strategy.
If you are forced into using a non-random method, you must be extremely careful when drawing conclusions. You should always be honest about the sampling technique used and acknowledge that a non-random approach will probably mean that biases are present within the data. To make the sample more representative of the true population, you may want to use weighting techniques, as sketched below.
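As a simple illustration of the weighting idea, the sketch below computes post-stratification style weights as the ratio of each group's population share to its achieved sample share; the age-group figures are invented.

# Hypothetical population and achieved-sample distributions by age group (invented figures).
population_share = {"16-24": 0.25, "25-34": 0.30, "35-55": 0.20, "56+": 0.25}
sample_share = {"16-24": 0.15, "25-34": 0.35, "35-55": 0.25, "56+": 0.25}

# Weight = population share / sample share; over-represented groups get weights below 1.
weights = {group: population_share[group] / sample_share[group] for group in population_share}
print(weights)  # the under-represented 16-24 group is weighted up (about 1.67)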
The importance of sampling should not be underestimated, as it determines to whom the results of your research will be applicable. It is important, therefore, to give full consideration to the sampling strategy to be used and to select the most appropriate. Your most important consideration should be whether you could adopt a simple random sample. If not, could one of the other random methods be used? Only when you have no choice should a non-random method be used.
All too often, researchers succumb to the temptation of generalising their results to a much broader range of people than those from whom the data was originally gathered. This is poor practice and you should always aim to adopt an appropriate sampling technique. The key is not to guess, but to take advice.
General Advantages
•Typicality of subjects is aimed for
•Permits exploration
General Disadvantages
•Unrepresentative
Calculating a Sample Size
A frequently asked question is “How many people should I sample?” It is an extremely good question, although unfortunately there is no single answer! In general, the larger the sample size, the more closely your sample data will match that from the population. However in practice, you need to work out how many responses will give you sufficient precision at an affordable cost.
Calculation of an appropriate sample size depends upon a number of factors unique to each survey and it is down to you to make the decision regarding these factors. The three most important are:
- How accurate you wish to be
- How confident you are in the results
- What budget you have available
The temptation is to say all should be as high as possible. The problem is that an increase in either accuracy or confidence (or both) will always require a larger sample and higher budget. Therefore a compromise must be reached and you must work out the degree of inaccuracy and confidence you are prepared to accept.
There are two types of figures that you may wish to estimate in your Market Research project: values such as mean income, mean height etc. and proportions (the percentage of people who intend to vote for party X). There are slightly different sample size calculations for each:
For a mean
The required formula is: s = (z / e)²
Where:
s = the sample size
z = a number relating to the degree of confidence you wish to have in the result. 95% confidence* is most frequently used and accepted. The value of ‘z’ should be 2.58 for 99% confidence, 1.96 for 95% confidence, 1.64 for 90% confidence and 1.28 for 80% confidence.
e = the error you are prepared to accept, measured as a proportion of the standard deviation (accuracy)
For example, imagine we are estimating mean income, and wish to know what sample size to aim for in order that we can be 95% confident in the result. Assuming that we are prepared to accept an error of 10% of the population standard deviation (previous research might have shown the standard deviation of income to be 8000 and we might be prepared to accept an error of 800 (10%)), we would do the following calculation:
s = (1.96 / 0.1)²
Therefore s = 384.16
In other words, 385 people would need to be sampled to meet our criterion.
*Because we interviewed a sample and not the whole population (if we had done this we could be 100% confident in our results), we have to be prepared to be less confident. Because we based our sample size calculation on the 95% confidence level, we can be confident that amongst the whole population there is a 95% chance that the mean is inside our acceptable error limit. There is of course a 5% chance that the measure is outside this limit. If we wanted to be more confident, we would base our sample size calculation on a 99% confidence level, and if we were prepared to accept a lower level of confidence, we would base our calculation on the 90% confidence level.
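A minimal sketch of this calculation, using the z-values quoted above; the helper name and the parameters are illustrative, and the confidence level and acceptable error are choices you would make for your own survey.

import math

# z-values for common confidence levels, as quoted in the text.
Z_VALUES = {0.80: 1.28, 0.90: 1.64, 0.95: 1.96, 0.99: 2.58}

def sample_size_for_mean(confidence, error):
    """Required sample size for estimating a mean, where `error` is the acceptable
    error expressed as a proportion of the standard deviation."""
    z = Z_VALUES[confidence]
    return (z / error) ** 2

# 95% confidence and an error of 10% of the standard deviation: 384.16, rounded up to 385.
needed = sample_size_for_mean(0.95, 0.10)
print(math.ceil(needed))  # 385 respondents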
For a Proportion
Although we are doing the same thing here, the formula is different:
s = z²(p(1 - p)) / e²
Where:
s = the sample size
z = the number relating to the degree of confidence you wish to have in the result
p = an estimate of the proportion of people falling into the group in which you are interested in the population
e = the proportion of error we are prepared to accept
As an example, imagine we are attempting to assess the percentage of voters who will vote for candidate X. Assume that we wish to be 99% confident of the result, i.e. z = 2.58, and that we will allow for errors in the region of +/-3%, i.e. e = 0.03. As for an estimate of the proportion of the population who would vote for the candidate (p), if a previous survey had been carried out, we could use the percentage from that survey as an estimate. However, if this were the first survey, we would assume that 50% (i.e. p = 0.5) of people would vote for candidate X and 50% would not. Choosing 50% will provide the most conservative estimate of sample size. If the true percentage were 10%, we will still have an accurate estimate; we will simply have sampled more people than was absolutely necessary. The reverse situation, not having enough data to make reliable estimates, is much less desirable.
In the example:
s = 2.58²(0.5 × 0.5) / 0.03²
Therefore s = 1,849
This rather large sample was necessary because we wanted to be 99% sure of the result and desired a very narrow (+/-3%) margin of error. It does, however, reveal why many political polls tend to interview between 1,000 and 2,000 people.
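The proportion formula can be sketched in the same way; the helper name is illustrative, and the 50% planning value is the conservative default discussed above.

def sample_size_for_proportion(z, p, error):
    """Required sample size for estimating a proportion to within +/- `error`,
    where `p` is the planning estimate of the proportion and `z` matches the
    desired confidence level (2.58 for 99%)."""
    return z ** 2 * p * (1 - p) / error ** 2

# 99% confidence, conservative p = 0.5, +/-3% acceptable error.
needed = sample_size_for_proportion(2.58, 0.5, 0.03)
print(round(needed))  # 1849; the formula gives exactly 1,849, and round() guards against floating-point noise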
Non-Sampling Error
What is Non-Sampling Error?
Definition
Any error affecting a survey or census estimate apart from sampling error
Occurs in complete censuses as well as in sample surveys
Types of Non-Sampling Error
•Non-Response Error
•Response Error
•Processing Error
•Coverage Error
Standard Error (SE)
Definition
A measure of the variability of an estimate due to sampling
Depends on variability in the population and sample size
Foundational measure
Margin of Error (MOE)
Definition
A measure of the precision of an estimate at a given level of confidence (90%, 95%, 99%)
Confidence level of a MOE
MOEs at the 90% confidence level for all published ACS estimates
Confidence Interval
Definition
A range that is expected to contain the population value of the characteristic with a known probability.
Formula: LCL = Est - MOE, UCL = Est + MOE
Where
LCL is the lower bound at the desired confidence level,
UCL is the upper bound at the desired confidence level,
Est is the ACS estimate, and
MOE is the margin of error at the desired confidence level
Confidence Interval computation
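As a simple illustration of the computation (the figures and the helper name are invented), the bounds are just the estimate minus and plus its margin of error.

def confidence_interval(estimate, moe):
    """Lower and upper confidence bounds from an estimate and its margin of error."""
    return estimate - moe, estimate + moe

# Hypothetical ACS-style estimate of 20,000 people with a published MOE of 1,500.
lcl, ucl = confidence_interval(20_000, 1_500)
print(lcl, ucl)  # 18500 21500, i.e. the 90% confidence interval runs from 18,500 to 21,500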
Coefficient of Variation (CV)
Definition
The relative amount of sampling error associated with a sample estimate, typically computed as the standard error divided by the estimate (expressed as a percentage)
Sampling Error is related to Sample Size
•The larger the sample size, the smaller the uncertainty or sampling error
•Combining ACS data from multiple years increases sample size and reduces sampling error
•All sample surveys have sampling error – including decennial census long-form data
How to Use Measures Associated With Sampling Error
How are Measures of Sampling Error Used?
•To indicate the statistical reliability and usability of estimates
•To make comparisons between estimates
•To conduct tests of statistical significance
•To help users draw appropriate conclusions about data
Test of Statistical Significance
Definition
A test to determine if it is unlikely that something has occurred by chance
A “statistically significant difference” means there is statistical evidence that there is a difference
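One common way to carry out such a test with published estimates and 90%-level MOEs is to convert each MOE back to a standard error and compare the difference with its combined standard error. The sketch below assumes the 1.645 factor that corresponds to the 90% confidence level; the estimates, MOEs and helper name are invented for illustration.

import math

def is_significantly_different(est1, moe1, est2, moe2, z_crit=1.645):
    """Rough test of whether two estimates differ, using 90%-level MOEs.
    Each MOE is converted back to a standard error (MOE / 1.645), the difference
    is divided by the combined standard error, and the result is compared with
    the critical z-value (1.645 for a test at the 90% confidence level)."""
    se1, se2 = moe1 / 1.645, moe2 / 1.645
    z = abs(est1 - est2) / math.sqrt(se1 ** 2 + se2 ** 2)
    return z > z_crit

# Hypothetical estimates and MOEs for two areas.
print(is_significantly_different(20_000, 1_500, 24_000, 1_800))  # True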
Definition of Sampling Error
What is Sampling Error?
Definition
The uncertainty associated with an estimate that is based on data gathered from a sample of the population rather than the full population
Calculating a Sampling Error
In estimating the accuracy of a sample (sampling error), or selecting a sample to meet a required level of accuracy, there are two critical variables; the size of the sample and the measure being taken which for simplicity we shall take as a single percentage e.g. the percentage aware of a brand. A common mistake about sample size is to assume that accuracy is determined by the proportion of a population included in a sample (e.g. 10% of a population). Assuming a large population, this is not the case and what matters is the absolute size of the sample regardless of the size of the population – a sample of 500 drawn from a population of one million will be as accurate as a sample of 500 from a population of five million (assuming both are truly random samples of the respective populations).
The relationship between sampling error, a percentage measure and a sample size can be expressed as a formula:
e = z√(p%(100 - p%)) / √s
Where:
e = sampling error (the proportion of error we are prepared to accept)
s = the sample size
z = the number relating to the degree of confidence you wish to have in the result
p = an estimate of the proportion of people falling into the group in which you are interested in the population
By applying the formula it can be calculated, for example, that a measure of 20% aware of a brand (p), taken from a sample of 500 respondents (s), will have a sampling error of +/-3.5% at the 95% confidence level:
e = 1.96 × √(20 × 80) / √500 = 3.5
This means, therefore, that based on a sample of 500 we can be 95% sure that the true measure (e.g. of brand awareness) among the whole population from which the sample was drawn will be within +/-3.5% of 20%, i.e. between 16.5% and 23.5%.
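A minimal sketch of the calculation in the example above; the helper name is illustrative, and percentages are handled on the 0-100 scale, as in the formula.

import math

def sampling_error(z, p_percent, n):
    """Sampling error (in percentage points) for a percentage measure p_percent
    observed on a sample of size n, at the confidence level corresponding to z
    (1.96 for 95% confidence)."""
    return z * math.sqrt(p_percent * (100 - p_percent)) / math.sqrt(n)

# 20% brand awareness measured on a sample of 500, at 95% confidence.
print(round(sampling_error(1.96, 20, 500), 1))  # about 3.5 percentage points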
Measures Associated with Sampling Error
Measures associated with sampling error include:
•Standard Error (SE)
•Margin of Error (MOE)
•Confidence Interval (CI)
•Coefficient of Variation (CV)
Sampling
What is sampling?
•Sampling is the process of selecting units (e.g., people, organizations) from a population of interest so that by studying the sample we may fairly generalize our results back to the population from which they were chosen.
•Sampling is the act, process, or technique of selecting a suitable sample, or a representative part of a population for the purpose of determining parameters or characteristics of the whole population
Process of Sampling
The sampling process comprises several stages:
•Defining the population of concern
•Specifying a sampling frame, a set of items or events possible to measure
•Specifying a sampling method for selecting items or events from the frame
•Determining the sample size
•Implementing the sampling plan
•Sampling and data collecting
•Reviewing the sampling process
Sunday, October 18, 2009
Question Wording
The wording of a question is extremely important. Researchers strive for objectivity in surveys and, therefore, must be careful not to lead the respondent into giving a desired answer. Unfortunately, the effects of question wording are one of the least understood areas of questionnaire research.
Many investigators have confirmed that slight changes in the way questions are worded can have a significant impact on how people respond. Several authors have reported that minor changes in question wording can produce more than a 25 percent difference in people's opinions.
Several investigators have looked at the effects of modifying adjectives and adverbs. Words like usually, often, sometimes, occasionally, seldom, and rarely are "commonly" used in questionnaires, although it is clear that they do not mean the same thing to all people. Some adjectives have high variability and others have low variability. The following adjectives have highly variable meanings and should be avoided in surveys: a clear mandate, most, numerous, a substantial majority, a minority of, a large proportion of, a significant number of, many, a considerable number of, and several. Other adjectives produce less variability and generally have more shared meaning. These are: lots, almost all, virtually all, nearly all, a majority of, a consensus of, a small number of, not very many of, almost none, hardly any, a couple, and a few.
The Length of a Questionnaire
As a general rule, long questionnaires get less response than short questionnaires. However, some studies have shown that the length of a questionnaire does not necessarily affect response. More important than length is question content. A subject is more likely to respond if they are involved and interested in the research topic. Questions should be meaningful and interesting to the respondent.
Anonymity and Confidentiality
An anonymous study is one in which nobody (not even the researcher) can identify who provided data. It is difficult to conduct an anonymous questionnaire through the mail because of the need to follow up on non-responders. The only way to do a follow-up is to mail another survey or reminder postcard to the entire sample. However, it is possible to guarantee confidentiality, where those conducting the study promise not to reveal the information to anyone. For the purpose of follow-up, identifying numbers on questionnaires are generally preferred to using respondents' names. It is important, however, to explain why the number is there and what it will be used for.
Some studies have shown that response rate is affected by the anonymity/confidentiality policy of a study. Others have reported that responses became more distorted when subjects felt threatened that their identities would become known. Others have found that anonymity and confidentiality issues do not affect response rates or responses.
Qualities of a Good Question
There are good and bad questions. The qualities of a good question are as follows:
•Evokes the truth. Questions must be non-threatening. When a respondent is concerned about the consequences of answering a question in a particular manner, there is a good possibility that the answer will not be truthful. Anonymous questionnaires that contain no identifying information are more likely to produce honest responses than those identifying the respondent. If your questionnaire does contain sensitive items, be sure to clearly state your policy on confidentiality.
•Asks for an answer on only one dimension. The purpose of a survey is to find out information. A question that asks for a response on more than one dimension will not provide the information you are seeking. For example, a researcher investigating a new food snack asks "Do you like the texture and flavor of the snack?" If a respondent answers "no", then the researcher will not know if the respondent dislikes the texture or the flavor, or both. Another questionnaire asks, "Were you satisfied with the quality of our food and service?" Again, if the respondent answers "no", there is no way to know whether the quality of the food, service, or both were unsatisfactory. A good question asks for only one "bit" of information.
•Can accommodate all possible answers. Multiple choice items are the most popular type of survey questions because they are generally the easiest for a respondent to answer and the easiest to analyze. Asking a question that does not accommodate all possible responses can confuse and frustrate the respondent. For example, consider the question:
What brand of computer do you own? __
A. IBM PC
B. Apple
Clearly, there are many problems with this question. What if the respondent doesn't own a microcomputer? What if he owns a different brand of computer? What if he owns both an IBM PC and an Apple? There are two ways to correct this kind of problem.
The first way is to make each response a separate dichotomous item on the questionnaire. For example:
Do you own an IBM PC? (circle: Yes or No)
Do you own an Apple computer? (circle: Yes or No)
Another way to correct the problem is to add the necessary response categories and allow multiple responses. This is the preferable method because it provides more information than the previous method.
What brand of computer do you own?
(Check all that apply)
Do not own a computer
IBM PC
Apple
Other
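If the responses will later be tabulated with software, a check-all-that-apply item is usually easiest to handle when every option is stored as its own yes/no indicator. The following sketch, in Python with invented example data, shows one way to count how often each option was checked.

from collections import Counter

# Hypothetical respondents; each option is its own True/False field.
responses = [
    {"no_computer": False, "ibm_pc": True,  "apple": False, "other": False},
    {"no_computer": False, "ibm_pc": True,  "apple": True,  "other": False},
    {"no_computer": True,  "ibm_pc": False, "apple": False, "other": False},
]

# Count how many respondents checked each option.
counts = Counter()
for r in responses:
    for option, checked in r.items():
        if checked:
            counts[option] += 1

for option, n in counts.items():
    print(option, "-", n, "of", len(responses), "respondents")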
•Has mutually exclusive options. A good question leaves no ambiguity in the mind of the respondent. There should be only one correct or appropriate choice for the respondent to make. An obvious example is:
Where did you grow up? __
A. country
B. farm
C. city
A person who grew up on a farm in the country would not know whether to select choice A or B. This question would not provide meaningful information. Worse than that, it could frustrate the respondent, and the questionnaire might find its way to the trash.
•Produces variability of responses. When a question produces no variability in responses, we learn little from having asked it, and it will not be possible to perform any meaningful statistical analysis on the item. For example:
What do you think about this report? __
A. It's the worst report I've read
B. It's somewhere between the worst and best
C. It's the best report I've read
Since almost all responses would be choice B, very little information is learned. Design your questions so they are sensitive to differences between respondents. As another example:
Are you against drug abuse? (circle: Yes or No)
Again, there would be very little variability in responses, and we'd be left wondering why we asked the question in the first place. A quick frequency check on pilot data, sketched below, can flag such items before the survey goes out.
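One practical way to catch low-variability items is to pilot the questionnaire and look at each item's frequency distribution: if nearly everyone picks the same option, the question will tell you little. Here is a minimal sketch, assuming hypothetical pilot data and an arbitrary 80 percent cutoff.

from collections import Counter

def low_variability(answers, threshold=0.8):
    # Flag an item if a single response category accounts for more than
    # `threshold` of all answers.
    counts = Counter(answers)
    top_share = counts.most_common(1)[0][1] / len(answers)
    return top_share > threshold

# Hypothetical pilot responses to the report question above.
pilot = ["B", "B", "B", "B", "A", "B", "B", "B", "B", "B"]
print(low_variability(pilot))  # True -- consider rewording or dropping the item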
•Follows comfortably from the previous question. Writing a questionnaire is similar to writing anything else. Transitions between questions should be smooth. Grouping questions that are similar will make the questionnaire easier to complete, and the respondent will feel more comfortable. Questionnaires that jump from one unrelated topic to another feel disjointed and are not likely to produce high response rates.
•Does not presuppose a certain state of affairs. Among the most subtle mistakes in questionnaire design are questions that make an unwarranted assumption. An example of this type of mistake is:
Are you satisfied with your current auto insurance? (Yes or No)
This question will present a problem for someone who does not currently have auto insurance. Write your questions so they apply to everyone. This often means simply adding an additional response category.
Are you satisfied with your current auto insurance?
___ Yes
___ No
___ Don't have auto insurance
One of the most common mistaken assumptions is that the respondent knows the correct answer to the question. Industry surveys often contain very specific questions that the respondent may not know the answer to. For example:
What percent of your budget do you spend on direct mail advertising?
Very few people would know the answer to this question without looking it up, and very few respondents will take the time and effort to look it up. If you ask a question similar to this, it is important to understand that the responses are rough estimates and there is a strong likelihood of error.
It is important to look at each question and decide if all respondents will be able to answer it. Be careful not to assume anything. For example, the following question assumes the respondent knows what Proposition 13 is about.
Are you in favor of Proposition 13?
___ Yes
___ No
___ Undecided
If there is any possibility that the respondent may not know the answer to your question, include a "don't know" response category.
•Does not imply a desired answer. The wording of a question is extremely important. We are striving for objectivity in our surveys and therefore must be careful not to lead the respondent into giving the answer we would like to receive. Leading questions are usually easy to spot because they use negative phraseology. For example:
Wouldn't you like to receive our free brochure?
Don't you think the Congress is spending too much money?
•Does not use emotionally loaded or vaguely defined words. This is one of the areas overlooked by both beginners and experienced researchers. Quantifying adjectives (e.g., most, least, majority) are frequently used in questions. It is important to understand that these adjectives mean different things to different people.
•Does not use unfamiliar words or abbreviations. Remember who your audience is and write your questionnaire for them. Do not use uncommon words or compound sentences. Write short sentences. Abbreviations are acceptable only if you are absolutely certain that every respondent will understand them; if there is any doubt at all, spell the term out. The following question might be fine if all the respondents are accountants (who would read AGI as adjusted gross income), but it would not be a good question for the general public.
What was your AGI last year? ______
•Is not dependent on responses to previous questions. Branching in written questionnaires should be avoided. While branching can be used as an effective probing technique in telephone and face-to-face interviews, it should not be used in written questionnaires because it sometimes confuses respondents. An example of branching is:
1. Do you currently have a life insurance policy? (Yes or No) If no, go to question 3
2. How much is your annual life insurance premium? _________
These questions could easily be rewritten as one question that applies to everyone:
1. How much did you spend last year for life insurance? ______
•Does not ask the respondent to order or rank a series of more than five items. Ranking items by importance becomes increasingly difficult as the number of items grows, and the answers become less reliable. The task is especially problematic when respondents must assign a percentage to each item in a series, because they have to keep mentally readjusting their answers until the figures total one hundred percent. Limiting the list to five items keeps the task manageable; a simple total check, sketched below, can also catch allocations that don't add up.
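If a percentage-allocation question cannot be avoided, it is worth screening each returned questionnaire for allocations that do not total one hundred percent, since those answers cannot be interpreted. A small sketch, with hypothetical budget categories, is shown here.

def allocation_is_valid(percentages, tolerance=1.0):
    # The respondent's figures should total 100, allowing a small
    # tolerance for rounding.
    return abs(sum(percentages) - 100) <= tolerance

# Hypothetical allocations across four budget categories.
print(allocation_is_valid([40, 30, 20, 10]))  # True
print(allocation_is_valid([50, 30, 30, 10]))  # False -- totals 120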
Many investigators have confirmed that slight changes in the way questions are worded can have a significant impact on how people respond. Several authors have reported that minor changes in question wording can produce differences of more than 25 percent in the opinions people express.
Several investigators have looked at the effects of modifying adjectives and adverbs. Words like usually, often, sometimes, occasionally, seldom, and rarely are commonly used in questionnaires, even though they clearly do not mean the same thing to all people. Some modifiers have highly variable meanings and others have relatively stable ones. The following have highly variable meanings and should be avoided in surveys: a clear mandate, most, numerous, a substantial majority, a minority of, a large proportion of, a significant number of, many, a considerable number of, and several. Others produce less variability and generally have more shared meaning: lots, almost all, virtually all, nearly all, a majority of, a consensus of, a small number of, not very many of, almost none, hardly any, a couple, and a few.