Phase Change Materials (PCM) in Textiles
In the textile industry, protection from extreme environmental conditions is a crucial requirement. Clothing that protects us from water, extreme cold, intense heat, open fire, high voltage, propelled bullets, toxic chemicals, nuclear radiation, biological toxins and the like are some examples.

Such clothing is used as sportswear, defense wear, firefighting wear, bulletproof jackets and other professional wear. Textile products can be made more comfortable when the properties of the textile materials can adjust to all types of environments.

At present, Phase Change Materials (PCMs) are one intelligent material that fulfils this requirement. A PCM absorbs, stores or releases heat in accordance with changes in temperature, and is increasingly applied in the manufacture of smart textiles.

Phase Change Materials
‘Phase change’ is the process of going from one state to another, e.g. from solid to liquid. Any material that undergoes this process is called a Phase Change Material (PCM).

Such materials absorb, store or release heat as they oscillate between solid and liquid form: they release heat as they transform to a solid state and absorb heat as they return to a liquid state. There are three basic phases of matter (solid, liquid and gas), but others, such as crystalline, colloid, glassy, amorphous and plasma phases, are also considered to exist.

This fundamental scientific phenomenon was first developed and used in building space suits for astronauts in the US space program. These suits kept the astronauts warm in the black void of space and cool in the solar glare. Phase Change Materials are compounds which melt and solidify at specific temperatures and are correspondingly able to store or release large amounts of energy.

The storage of thermal energy by changing the phase of a material at a constant temperature, e.g. from a liquid to a solid state, is classified as ‘latent heat’. When a PCM undergoes a phase change, a large amount of energy is absorbed or released. The most significant characteristic of latent heat is that it involves the transfer of much larger amounts of energy than sensible heat transfer.

Quite a few of these PCMs change phase within a temperature range just above and below human skin temperature. This characteristic is exploited in protective all-season outfits and in clothing for abruptly changing environments. Fibre, fabric and foam with built-in PCMs store the warmth of the body and then release it back to the body as the body requires it. Since the phase change process is dynamic, the materials continually shift from solid to liquid and back according to the physical movement of the body and the outside temperature. Furthermore, the Phase Change Materials are used but never get used up.

Phase Change Materials are waxes that have the distinctive capacity to absorb and release heat energy without changing temperature. These waxes include eicosane, octadecane, nonadecane, heptadecane and hexadecane. They all possess different freezing and melting points, and when mixed in a microcapsule they will store and release heat energy while maintaining a temperature range of 30-34°C, which is very comfortable for the body.

The amount of heat absorbed by a PCM in an actual phase change can be compared with the amount of heat absorbed in an ordinary heating procedure by taking water as an example. The melting of ice into water leads to the absorption of latent heat of nearly 335 J/g. If the water is then heated further, a sensible heat of only about 4 J/g is absorbed for each one-degree rise in temperature. Hence, the latent heat absorption in the phase change from ice into water is nearly 100 times greater than the sensible heat absorption per degree.
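
To make the comparison concrete, here is a minimal back-of-envelope calculation using the figures quoted above (the 10 g sample size is an arbitrary assumption for illustration):

    # Back-of-envelope comparison of latent vs sensible heat for water,
    # using the approximate figures quoted above.
    LATENT_HEAT_FUSION = 335.0  # J/g absorbed when ice melts into water
    SPECIFIC_HEAT = 4.0         # J/g per degree C for liquid water (approx.)

    mass_g = 10.0               # an assumed 10 g sample
    latent = mass_g * LATENT_HEAT_FUSION     # heat absorbed in the phase change
    sensible = mass_g * SPECIFIC_HEAT * 1.0  # heat absorbed per one-degree rise

    print(f"Latent heat of melting: {latent:.0f} J")      # 3350 J
    print(f"Sensible heat per degree: {sensible:.0f} J")  # 40 J
    print(f"Ratio: {latent / sensible:.0f}x")             # ~84x, i.e. nearly 100x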

How to assimilate PCMs in fabrics?
The microencapsulated PCM can be combined with woven, nonwoven or knitted fabrics.

The capsules can be added to the fabric in various ways such as:

Microcapsules: Microcapsules of various shapes (round, square and triangular) are embedded within fibres at the polymer stage. The PCM microcapsules are permanently fixed within the fibre structure during the wet spinning process of fibre manufacture. Microencapsulation gives the fabrics a softer hand, greater stretch, and more breathability and air permeability.

Matrix coating during the finishing process: The PCM microcapsules are embedded in a coating compound such as acrylic or polyurethane, which is then applied to the fabric. Many coating methods are available, such as knife-over-roll, knife-over-air, pad-dry-cure, gravure, dip coating and transfer coating.

Foam dispersion: Microcapsules are mixed into a water-blown polyurethane foam mix, and these foams are applied to a fabric in a lamination process in which the water is removed by drying.

Body and clothing systems
The required thermal insulation of clothing systems depends mainly on the physical activity and on the surrounding conditions, such as temperature and relative humidity. The amount of heat produced by humans depends largely on physical activity and can vary from 100 W while resting to over 1000 W during maximum physical performance.

Especially during the cooler seasons (around 0°C), the recommended thermal insulation is defined so as to ensure that the body is adequately warm when resting. During intense activity, which is often the case with winter sports, the body temperature rises with the increased heat production. To keep this increase within a certain limit, the body perspires in order to withdraw energy from the body by evaporative cooling. If the thermal insulation of the clothing is reduced during physical activity, part of the generated heat can be removed by convection, so the body does not need to perspire as much.

The quality of insulation a garment provides against heat and cold is largely governed by the thickness and density of its component fabrics: high thickness and low density make for better insulation. In many cases, thermal insulation is also provided by air gaps between the garment layers.

However, the external temperature also influences the effectiveness of the insulation. The more extreme the temperature, be it very high or very low, the less effective the insulation becomes. Thus, a garment designed for its capability to protect against heat or cold is chosen by its wearer based on the climate in which the garment is expected to be worn.

A garment produced from a thick fabric, though, will be heavier, and the freedom of movement of the wearer will be restricted. Clearly, then, a garment made from an intelligent fabric, whose nature can change according to the external temperature, can offer superior protection, but such a garment must also be comfortable for the wearer.

Temperature change effect of PCMs
PCM microcapsules can create small, transitory heating and cooling effects in garment layers when the temperature of the layers reaches the PCM transition temperature. The effect of phase change materials on the thermal comfort of protective clothing systems is likely to be greatest when the wearer is repeatedly going through temperature transients (i.e., moving back and forth between a warm and a cold environment) or intermittently touching or handling cold objects. The temperature of the PCM garment layers must vary frequently for the buffering effect to continue.

The most obvious example is the change of water into ice at 0°C and into steam at 100°C. Many products that change phase at or near body temperature are now being integrated into fibres, laminates or coated substrates, where they help keep the body temperature in equilibrium and more constant. This is aimed at athletes in extreme conditions and people involved in extreme sports such as mountaineering and trekking, and it is also being used in industrial applications where people are very mobile, for example moving in and out of cool rooms.

Effects on fabrics

When the solid PCM is heated to its melting point, it absorbs heat energy as it moves from a solid to a liquid state. This phase change produces a short-term cooling effect in the clothing layers. The heat energy may come from the body or from a warm environment. Once the PCM has completely melted, the storage of heat stops.

If the PCM garment is worn in a cold environment where the temperature is below the PCM’s freezing point and the fabric temperature drops below the transition temperature, the microencapsulated liquid PCM will return to a solid state, releasing heat energy and producing a momentary warming effect. The developers assert that this heat exchange creates a buffering effect in clothing, minimizing changes in skin temperature and maintaining the thermal comfort of the wearer.

The clothing layer(s) containing PCMs must pass through the transition temperature range before the PCMs change phase and either release or absorb heat. Therefore, the wearer has to make some effort for the temperature of the PCM fabric to change. PCM effects are transient phenomena; they have no effect in a steady-state thermal environment.

Active microclimate cooling systems need batteries, pumps, circulating fluids and sophisticated control devices to give satisfactory body cooling, but their performance can be adjusted and sustained for long periods of time. They are, however, costly and complicated. Present passive microclimate devices use latent phase change: either liquid-to-gas evaporation of water (Hydroweave), a solid-to-liquid phase shift in a cornstarch/water gel, or a paraffin contained in plastic bladders.

The liquid evaporation garment is cheaper, but will only give minimal or short-term cooling in the highly humid environment found inside protective clothing. It must also be re-wetted to revitalize the garment for reuse. The water/starch gel-type cooling garment is presently preferred by the military and can offer both satisfactory and long-lasting cooling near 32°F (0°C), but it can also feel very cold against the skin and needs a very cold freezer (5°F) to fully recharge the garment. When fully charged, its gel PCMs are somewhat rigid blocks, and the garment has limited breathability.

Paraffin PCM garments are comparatively cheaper, but their plastic bladders can split, leaking their contents and creating a serious fire hazard. In addition, their paraffin PCM melts at about 65°F (18°C) and must be recharged at temperatures below 50°F (10°C) in a refrigerator or ice chest. Their rate of cooling also declines over time, because solid paraffin blocks are thermal insulators and limit the heat that can be transferred into or out of them. The plastic bladders used to contain the PCM also severely limit the airflow and breathability of the garment, reducing comfort.

Uses of PCM

Automotive textiles

The scientific principle of temperature control by PCMs has been deployed in various ways in the manufacture of textiles. In summer, the temperature inside the passenger compartment of an automobile can rise significantly when the car is parked outside. Many cars are equipped with air conditioning systems to regulate the interior temperature while driving; however, providing adequate cooling capacity requires a great deal of energy. Hence the application of Phase Change Material technology in the automotive interior could offer energy savings as well as improving the thermal comfort of the car interior.

Apparel active wears

Active wear is expected to provide a thermal equilibrium between the heat produced by the body while performing a sport and the heat released into the environment. Normal active wear garments do not always satisfy these needs. The heat produced by the body during strenuous activity is often not discharged into the environment in the required amount, resulting in thermal stress. On the other hand, during periods of rest between activities, less heat is produced by the body, and if heat continues to be released at the same rate, hypothermia is likely to occur. Applying PCMs in clothing helps to moderate these thermal shocks, and thus the thermal stress on the wearer, and helps increase his or her working efficiency under high stress.

Lifestyle apparel – elegant fleece vests, men’s and women’s hats, gloves and rainwear.

Outdoor sports – apparel jackets and jacket linings, boots, golf shoes, running shoes, socks and ski and snowboard gloves.

From their original uses in space suits and gloves, phase change materials have also found their way into consumer products.

Aerospace textiles

Phase Change Materials used in current consumer products were primarily developed for application in space suits and gloves, to protect astronauts from extreme temperature fluctuations while performing extra-vehicular activities in space.

The usefulness of the insulation stems from microencapsulated Phase Change Materials (micro-PCMs) originally created to keep the gloved hands of space-walking astronauts warm. The materials were considered ideal as a glove liner, providing support during the temperature extremes of the space environment.

Medical textiles

Textiles containing Phase Change Materials (PCMs) could soon find uses in the medical sector: to raise the thermo-physical comfort of surgical clothing such as gowns, caps and gloves, and in bedding products like mattress covers, sheets and blankets. Such a product helps keep the patient warm enough during an operation by giving insulation tailored to the body’s temperature.

Other uses of PCM

Phase Change Materials are currently being used in textiles for the extremities: gloves, boots, hats, etc. Different PCMs can be selected for different uses. For example, the temperature of the skin near the torso is about 33°C (91°F), whereas the skin temperature of the feet is nearly 30-31°C. These PCM materials can be useful down to 16°C, enough to ensure the comfort of someone wearing a ski boot in the snow. They are increasingly applied in body-core protection, and use will shift into the areas of blankets, sleeping bags, mattresses and mattress pads.

PCM Types

Standard phase change materials are generally a polymer/carrier filled with a thermally conductive filler, which changes from a solid to a high-viscosity liquid (or semi-solid) state at a certain transition temperature. These materials conform well to irregular surfaces and possess wetting properties like thermal greases, which considerably decrease the contact resistance at the respective interfaces. Because of this composite structure, phase change materials are capable of withstanding the mechanical forces of shock and vibration, safeguarding the die or component from mechanical damage. Moreover, the semi-solid state of these materials at high temperature addresses issues linked to “pump-out” under thermo-mechanical flexure.

When heated to the targeted transition temperature, the material softens considerably to a near liquid-like physical state and expands slightly in volume. This volumetric growth causes the thermally conductive material to flow into and fill the microscopic air gaps that exist between the heat sink and the electronic component. With the air gaps between the thermal surfaces filled, the high degree of wetting of the two surfaces lessens the contact resistance.

In general, there are two types of phase change materials:

* Thermally conductive and electrically insulating.
* Electrically conductive.

The main difference between the thermally and electrically conductive materials is the film or carrier onto which the phase change polymer is coated. With the electrically insulating material, a minimum level of voltage isolation can be achieved.

Analysis of the thermal barrier function of Phase Change Materials in textiles

Producers can now use PCMs to provide thermal comfort in a huge range of garments. But to know how much and what kind of PCM to apply, and how to modify the textile, in order to make a garment fit for its purpose, it is essential to quantify the effect of the active thermal barrier offered by these materials.

The total thermal capacity of the PCM in a product depends on its specific thermal capacity and its quantity. The required quantity can be estimated by considering the application conditions, the desired thermal effect and its duration, and the thermal capacity of the specific PCM. The structure of the carrier system and of the end-use product also affects the thermal efficiency of the PCM, and this has to be considered in the material selection and the product design.
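
As a rough sketch of that estimate, the calculation below works backwards from a desired buffering effect to a PCM quantity. The heat flow, duration and latent capacity are illustrative assumptions, not measured values:

    # A minimal sketch of the quantity estimate described above: given a
    # desired buffering effect (heat flow and duration) and the latent
    # capacity of the chosen PCM, estimate the mass of PCM required.
    def pcm_mass_required(heat_flow_w, duration_s, latent_heat_j_per_g):
        """Mass of PCM (grams) needed to buffer a given heat flow for a given time."""
        total_energy_j = heat_flow_w * duration_s  # energy to absorb, in joules
        return total_energy_j / latent_heat_j_per_g

    # e.g. buffer 10 W of excess body heat for 15 minutes with a paraffin
    # assumed to store about 200 J/g of latent heat
    mass_g = pcm_mass_required(heat_flow_w=10, duration_s=15 * 60,
                               latent_heat_j_per_g=200)
    print(f"Approximate PCM required: {mass_g:.0f} g")  # ~45 g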

Prospect of PCM

The main challenge in developing textile PCM structures is the method of their application. Encapsulation of PCMs in a polymeric shell is an obvious choice, but it adds significant inactive weight to the active material. Efficient encapsulation, the core-to-wall ratio, encapsulation yield, stability during application and the incorporation of capsules into the fabric structure are some of the technological aspects being considered.
Though PCMs are being promoted in various types of apparel and related products, the applications in which they can really work are limited. As better test methods are developed for PCMs, makers of PCM materials and garments will have to target more carefully the markets in which their products do work well.

Conclusion

Since a huge amount has been invested in research and development in these areas in the developed countries, it is expected that all-season outfits will soon be mass-produced. For example, in Britain, scientists have designed an acrylic fibre incorporating microcapsules containing Phase Change Materials. These fibres have been used for producing lightweight all-season blankets.

Many garment-making companies in the USA are now producing such garments: thermal underwear and socks for the inner layer, knit shirts or coated fleece for the insulating layer, and jackets with PCM interlinings for the outer layer, besides helmets, other headgear and gloves. Such clothing can maintain warm and comfortable temperatures in both weather extremes. There is no doubt that textiles which incorporate PCMs will find their way into several more uses in the near future.

A Comprehensive Look at PET Preform Moulds in Packaging

Thanks to the extensive research and application by scientists, right from the day when the Swedish chemist Jöns Jacob Berzelius produced the first condensed formulation of polymer in 1847 to British scientist Alexander Parkes who further developed cellulose material in 1861, moulding technology has progressed by leaps and bounds. However, the fruits of all these studies were best utilised by the American Hyatt brothers – John Wesley and Isaiah who conceptualised the maiden injection moulding machine in 1872.

Building on these successful experiments, two German scientists, Arthur Eichengrün and Theodore Becker, worked on soluble forms of cellulose acetate in 1903. These developments enabled Eichengrün to develop the first injection mould (spelt ‘mold’ in the USA) press in 1919, which he patented two decades later.

World War II lent further impetus to this technology, and in 1946 the American inventor James Watson Hendry designed and fabricated the first screw injection machine, with more precise control over the speed of injection and the quality of the articles produced. Over the next 24 years, Hendry came up with numerous versions, including a gas-assisted injection moulding process which permitted the production of complex, hollow articles that cooled quickly.

All these products evolved from injection moulds and found innumerable applications in practically every utility item under the sun: automotive, medical, aerospace, consumer products, toys, plumbing, packaging and construction. In particular, ever since polymer chemists formulated polyethylene terephthalate, PET in short, the packaging segment has witnessed great strides.

Today, the technique of preform moulding plays a vital role in the packaging industry, and the quality of the moulded parts and end products depends on the manner in which the moulds are designed and developed. This is metallurgical engineering skill of a high degree, and it calls for a fully equipped tool room facility.

This has a great and direct bearing on improved design flexibility and also the strength and finish of manufactured parts alongside reducing production time, cost, weight and waste. For instance, a leading provider of PET packaging solutions boasts of one of the best tool rooms to design and fabricate such high quality preform moulds.

These products range from 4 to 750 grams and vary between 12 and 150 millimetres in neck size, with cavity ratings touching the 72 mark. As for caps and closures, the moulds range from 32 to 48 cavities. The credit for such achievements at this particular unit can be attributed to the capabilities of the design engineers: the in-house design studio has integrated and computerised processes of product design, including analysis and simulated application of the mould.

The latest software has ensured consistency in the various dimensions of the moulded products. Reportedly, efficient heat transfer and coherent cooling, achieved through technically embedded cooling channel structures and strong water flow, are the hallmarks of these preform moulds.

In addition to these aspects of top quality, different companies have also been providing periodical refurbishment of the moulds for the end users as part of their turnkey services in packaging technology.

Biotech Start-Ups Can Benefit From the Services of Chemical Toll Manufacturers

Starting a new biotech company can be an expensive proposition, so outsourcing certain functions to chemical toll manufacturers makes sense. Creating the infrastructure of a new firm can take millions upon millions in capital to establish support personnel, let alone labs and research facilities. When you do not yet have a commercially viable product successfully marketed, outsourcing rather than branching into manufacturing means saving money.

It makes sense as a life science company to remain in the research and development position until one or more products can be marketed to bring in much needed capital for expansion. Therefore, your company should remain the “creative factory” while your outsourcing company handles the actual product development, testing and packaging.

Full Scale Options

The right chemical toll manufacturer will have the facilities for full-scale production, from laboratory testing to creating small batches of product to full-scale runs. They must also offer raw materials storage, lab analysis and a variety of equipment to take your project from the development phase to packaging the final product.

If you have a completely new product in the research and development phase, the right outsource company can help you with custom synthesis. Therefore, the ability to handle polymers, pre-polymers, fine chemicals and dyes is important. Fine chemicals handling in particular is important because you might require special help for alcohols, acids, aromatic compounds and more, including a drying phase.

When interviewing the top manufacturers in the chemical world, make sure they have full scale production capabilities by checking out their equipment. The outsourced company should have reactors, centrifuges, driers, indoor storage areas for materials as well as a full scale laboratory for analyzing fine chemicals, polymers and more.

Other Important Requirements

For a successful launch of your new biotech firm, it is important that the outsourced chemical company you choose have exacting safety standards and protocols set in place to prevent any chemical disasters. Do they meet the proper ISO industry standard ratings? Do they follow a responsible environmental safety plan? Do they maintain compliance with their country’s established governing laws?

Another aspect to check is the training of the chemical workers. Does the company offer routine safety checks and training? Are continuing education courses offered so that workers are up on the latest safety protocols and technological advances? When your outsourced chemical toll manufacturers have exacting standards in business and in safety, your new life science start-up firm will benefit, creating an environment more conducive to success.

Particle Imaging Techniques

What is particle imaging used for?

Where particle size analysis is used to produce a distribution curve showing how large the majority of particles in a given solution are, particle imaging adds the ability to quantify the morphological (i.e., shape) characteristics of particles.

Determining particle shape parameters

When reporting particle size, we try to report a single number for each particle: the equivalent spherical size. In image analysis reports, this is often termed the CE diameter (Circular Equivalent diameter). However, when it comes to reporting particle shape, there are many numerical descriptors that can be used, including length/width, aspect ratio, circularity, compactness, roughness, convexity and elongation. Most image analysis systems also report parameters such as lightness/darkness, opacity and intensity. All of these parameters help differentiate one type of particle from another, which is one of the real strengths of image analysis.
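
For illustration, the snippet below computes two of these descriptors from a particle’s measured area and perimeter. These are the widely used textbook definitions; individual instruments may define the parameters slightly differently:

    import math

    def ce_diameter(area):
        """Circular Equivalent diameter: diameter of a circle with the same area."""
        return math.sqrt(4.0 * area / math.pi)

    def circularity(area, perimeter):
        """4*pi*A / P^2 -- equals 1 for a perfect circle, <1 for irregular shapes."""
        return 4.0 * math.pi * area / perimeter ** 2

    # A hypothetical particle: area in um^2, perimeter in um
    area, perimeter = 120.0, 44.0
    print(f"CE diameter: {ce_diameter(area):.1f} um")       # ~12.4 um
    print(f"Circularity: {circularity(area, perimeter):.2f}")  # ~0.78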

Where particle sizing can only report a size distribution, image analysis can be used to quantify subtle differences in shape or optical properties. New image analysis systems also provide powerful software packages that enable classification of particles into different groups. This in turn enables users to quantify the different types of material in a single sample.
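
A minimal sketch of such rule-based classification is shown below. The group names and thresholds are hypothetical; real packages let the user build these filters interactively:

    # Classify particles into groups using simple shape rules. The class
    # names and thresholds here are invented for illustration only.
    def classify(particle):
        """Assign a particle dict (ce_diameter in um, circularity 0-1) to a group."""
        if particle["circularity"] > 0.9:
            return "spherical (e.g. beads, droplets)"
        if particle["ce_diameter"] > 50:
            return "large irregular (e.g. aggregates)"
        return "small irregular (e.g. fragments)"

    sample = [
        {"ce_diameter": 12.4, "circularity": 0.95},
        {"ce_diameter": 63.0, "circularity": 0.55},
        {"ce_diameter": 8.1,  "circularity": 0.70},
    ]
    for p in sample:
        print(p, "->", classify(p))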

How FlowCAM works

FlowCAM is one of the more popular of the new generation of particle imaging systems. It counts, sizes and images the particles in a sample, and also provides the option of colour analysis and detection of living organisms by means of fluorescence. The measurement process is as follows:

* Particles are suspended in water
* The water is pumped through a flow cell
* Optics and a CCD camera magnify and capture an image of each particle, measuring its shape and size
* The results are displayed as a scattergram.
* The user selects distributions to display, and regions in the scattergram of particular interest can be selected and displayed in more detail.
* A library of information is housed in the system for screening future samples, if necessary.

Real life applications

In real life, particle size and shape determining technologies like those FlowCAM incorporates are used in applications like:

* Water analysis for environmental purposes, measuring things like plankton, algal blooms and levels of sedimentation
* Biotechnological settings, where quantification of enzymes or fermentation processes is needed
* Process monitoring, which covers most industrial applications – monitoring emulsions and dispersions, and in the polymer and pharmaceutical industries.
* Formulation monitoring, used for solid substances like topical cosmetics, flavour carriers, inks or pigments.

Universal Plausibility Metric Falsifies Evolution

Carl Sagan, the famous skeptic who died of cancer some years ago, often invoked the phrase “billions and billions of years” when he asserted that evolution had enough time to work. Random collisions of atoms in a primordial warm little pond somewhere were supposed to produce the first living cells if simply given enough time.

Modern observational science has progressed to the point where we now know the exact composition of bio-polymers (proteins) and the probability of even one protein forming by random interaction of chemicals.

Scientists as prominent as Francis Crick and Harold Morowitz have done probability analyses and shown that the random formation of just one protein molecule, even one with “only” 200 precisely sequenced amino acids, is so remote as to essentially never happen.

Those who cling to evolutionary dogmatism have nevertheless held on to their appeal to “deep time” which is to say “billions and billions of years.” Unfortunately we can’t put billions of years into a test tube and do observational experiments on them. So we have a stalemate.

Random formation of proteins is theoretically possible, but should a scientist regard such a hyper-improbable random event as plausible and worthy of scientific respect?

Recently there has been some significant movement on this issue of deep time and it looks like the stalemate has started to be resolved even in the highest scientific circles. A peer-reviewed article in a prominent science journal has introduced the “Universal Plausibility Metric” (UPM) as a means of objectively deciding if an event is actually plausible and not merely possible.

The article by David L. Abel is titled “The Universal Plausibility Metric (UPM) & Principle (UPP)” in the journal “Theoretical Biology and Medical Modeling.” Vol. 6:27 (Dec 3, 2009). The following is a quote from Abel’s article: “But at some point our reluctance to exclude any possibility becomes stultifying to operational science. Falsification is critical to narrowing down the list of serious possibilities.”

“Almost all hypotheses are possible. Few of them wind up being helpful and scientifically productive. Just because a hypothesis is possible should not grant that hypothesis scientific respectability.” Abel goes on to describe the mathematical content of the Universal Plausibility Metric and how it can be used to falsify hypotheses that are based on utterly remote possibilities.

Abel’s article gives much ammunition to the creationist and intelligent design movements in the quest to falsify the notion that life came about by random interaction of nonliving molecules. The UPM is an objective standard from a respected scientific source. Evolution fundamentalists will find it impossible to hide behind “billions and billions of years.”

To look at a specific example of a probability analysis that could be held up against the UPM, let’s consider the work of Francis Crick, Nobel Prize-winning co-discoverer of the structure of DNA. Crick did a probability analysis on the possibility of one simple protein forming by chance.

The protein would consist of just 200 amino acids in a polypeptide chain. He describes this analysis in his book “Life Itself: Its Origin and Nature” on pages 51-52.

Crick found that the number of possible random sequences of the 200 amino acids was ten to the 260th power, which is a one with 260 zeros after it. That number is far more than the number of atoms in the known universe!
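
The quoted figure is easy to check, assuming 20 possible amino acids at each of the 200 positions:

    import math

    # With 20 possible amino acids at each of 200 positions, the number
    # of possible sequences is 20^200; compute its order of magnitude.
    exponent = 200 * math.log10(20)              # log10 of 20^200
    print(f"20^200 is about 10^{exponent:.0f}")  # about 10^260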

In other words, hitting the right sequence to form the protein would be a one-in-ten-to-the-260th-power chance, a smaller probability than randomly choosing the right atom out of all the atoms in the universe! Even many billions of years of random interaction of atoms does little to diminish this immense hurdle of hyper-improbability.

An application of the UPM to Crick’s analysis would show that the one simple protein would never form by chance. A protein forming by chance remains possible, but it is totally implausible and thus should not be given scientific respectability.

For argument’s sake, let’s say that a protein did form by chance, against all odds, in the primordial soup. To form a living cell, that protein would have to combine with trillions of other proteins (which would also have to form by chance!) and then form, by chance, a structure approximating the cell wall, nucleus and mitochondria of a cell, and then the cell would have to begin living and reproducing.

The monstrously high improbability of such a scenario should cause all scientifically minded people to regard the notion of life coming from nonlife as falsified. If this first step of evolution falls, the whole evolutionary edifice falls. The same utterly remote probabilities apply to all steps in the theory of evolution. Genetic mutations, the supposed mechanism of evolution, must also cope with utterly remote probabilities.

Genetic mutations are mostly destructive errors to the DNA. To put feathers on a lizard, for example, would require a favorable macromutation that would add a long strand of properly sequenced base pairs to the DNA.

Though theoretically possible, such a favorable macromutation has never been observed. A step-by-step series of small mutations to gradually put feathers on a lizard would also be implausible, because partially formed feathers would be disadvantageous to the lizard. The needed macromutation, with hundreds of properly sequenced base pairs, would be as hyper-improbable as the chance formation of the protein discussed above.

We can confidently say that virtually all steps in the alleged macroevolution process would fail the UPM test. Evolution is falsified.

Secular fundamentalists have relied on evolution as their origins myth and have fooled many into accepting evolution on the false claim that it’s based on science. It’s becoming clear that secularists must abandon evolution and since no other secular grand theory of origins is waiting in the wings to replace evolution, secularists are obliged to return to God.

What Is A Capillary Column And Types Of Capillary Columns?

A capillary column for GC is fundamentally a very slender tube with the stationary phase coating the internal surface, whereas in packed columns the stationary phase is coated onto the packing particles. A capillary column thus comprises two parts: the tubing and the stationary phase.

Fused silica and stainless steel are the chief tubing materials. There are also many stationary phases, such as high-molecular-weight, thermally stable polymers that are liquids or gels. The most common stationary phases are polyethylene glycols, polysiloxanes and small porous particles composed of polymers or zeolites.

Types:

In gas chromatography, three main types of capillary columns are used:

  1. Wall Coated Open Tubular (WCOT)
  2. Surface Coated Open Tubular (SCOT)
  3. Fused Silica Open Tubular (FSOT)

Wall Coated Open Tubular (WCOT)

Here the interior wall of the capillary column is coated with a very fine layer of liquid stationary phase.

Surface Coated Open Tubular (SCOT)

The capillary tube wall is coated with a thin layer of solid support onto which the liquid stationary phase is adsorbed. The separation efficiency of SCOT columns is higher than that of WCOT columns because of the increased surface area of the stationary phase layer.

Fused Silica Open Tubular (FSOT)

The walls of fused silica capillary tubes are reinforced by a polyimide coating. These columns are flexible and can be bent into coils.

Uses of Capillary Column in GC

Gas chromatography is a universally used analytical procedure in many scientific research and industrial laboratories for quality analysis as well as for the identification and quantitation of compounds in a mixture. GC is also regularly used in many environmental and forensic labs because it permits the detection of very small quantities and volumes.

A wide variety of samples can be analysed as long as the compounds are suitably thermally stable and reasonably volatile. In all gas chromatography analyses, the separation of the various compounds occurs because of their interaction with the stationary and mobile phases, just as in simple paper chromatography a solvent (e.g. water or alcohol) drifts across the paper (the stationary phase), carrying the sample with it.

Principle of Operations

The various compounds that constitute the sample will travel more or less slowly depending, in simple terms, on how strongly they cling to the paper. The stickier compounds move more slowly and therefore travel a smaller distance in a given time; the end result is separation.

In gas chromatography the gas is the mobile phase, the column coating is the stationary phase, and the vaporised sample is separated according to how long its constituent compounds take to emerge from the far end of the column and flow into the detector. This time is known as the retention time.
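
As a minimal sketch of how retention times are used in practice, the snippet below matches an observed peak against a small reference table. The compounds, times and tolerance are hypothetical illustrations, not data from any particular column:

    # Identify an observed peak by comparing its retention time against a
    # reference table of known compounds for this (hypothetical) column.
    REFERENCE = {   # retention time in minutes -> compound
        2.1: "hexane",
        4.7: "toluene",
        8.3: "xylene",
    }

    def identify(observed_rt, tolerance=0.2):
        """Return the reference compound whose retention time is closest,
        provided it lies within the tolerance window."""
        best = min(REFERENCE, key=lambda rt: abs(rt - observed_rt))
        if abs(best - observed_rt) <= tolerance:
            return REFERENCE[best]
        return "unknown"

    print(identify(4.62))  # -> toluene
    print(identify(6.00))  # -> unknown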

One can acquire columns coated with various stationary phases depending on what type of compounds one wishes to analyse, as the type of stationary phase will determine which compounds pass through it faster or slower.

New Engineering Materials

A number of scientific fields have helped us produce new engineering materials, among them nano engineering and forensic engineering. Another important field is failure analysis: with new inventions appearing every second, failures are natural, and this field of science helps in analysing those failures and rectifying them. Hundreds upon hundreds of scientists and inventors are working and experimenting continuously to make this world a better place to live.

These new inventions have gradually changed the way people live, and they can now be seen in every field that has a direct or indirect link to humans. As an example of new engineering materials, consider polymer engineering. This field of science develops new concepts based on materials that are polymeric in nature. Such materials are not the result of a single engineering technology but are obtained from a blend of different technologies.

To understand this blend of technologies, consider the example of fibre-reinforced blends, in which the preferred wetting material induces a synergy: it binds the fibre reinforcement to the polymer blend. This leads to a high stiffness and rigidity that is maintained even at very high temperatures.

To understand the phenomenon in depth, we have to look at the kinetics of the system, that is, of the wetting material. It is important to understand the microstructures of this system, which exists in three states and forms this highly stable structure. If the material is understood, it can lead us to an explanation of the phenomenon and show how these microstructures are sometimes highly conductive in nature. Another example is the use of nano-fillers to alter the polymer: by increasing the rigidity, it is possible to obtain good compressive strength in the material.

Some new engineering materials were introduced at SAMPE 2000. One of the inclusions was ProTechtor.

ProTechtor has superb blast containment characteristics. It is produced from fibers and has a density range from 3.5 to 128. It has numerous advantages, such as a firm core, and moreover the core can be made insulating. There are a number of other materials that are helping in one way or another to improve people's lives.

Processing Chemicals – Outsourced Manufacturer Analytical Capabilities Important

When you require an outsourced manufacturer to process the chemicals for your various product lines, it is important that they have exceptional analytical capabilities. You want a well-rounded chemical manufacturer that has the capability to grow with you, particularly when you are trying to develop a new product or improve an existing one.

Typical Equipment a Contractor Should Have

Equipment for gas chromatography is essential. Chromatography, in bare-bones terms, is the separation of mixtures; when it involves gases, you are dealing with potentially volatile components such as solvents and monomers. Typical applications of gas chromatography include separating the different components in a mixture or testing the purity of a material. In some cases, this process can also identify the ingredients in an unknown chemical compound or mixture.

Facilities for reversed-phase high performance liquid chromatography (HPLC) are important too. This process is a type of column chromatography that involves the purification of individual compounds from mixtures, and it can be used at preparative scales.

Yet another type of chromatography should be considered an absolute must in your quest for a contractor with outstanding analytical capabilities: size exclusion chromatography. This process helps determine the molecular weight of different polymers. These polymers, often used in plastics, are chains held together by covalent chemical bonds.

Fourier transform infrared (FTIR) spectroscopy is another capability you should confirm when shopping around for a manufacturing contractor. This type of spectroscopy characterises materials by measuring how a sample absorbs infrared radiation across a range of wavelengths.

Manual and automatic titrations are yet another important analytical capability the manufacturer’s laboratory should have. Titration is a common technique of quantitative analysis in which the concentration of an analyte is determined using a reagent of known concentration. The method is also called volumetric analysis because measurements of volume are involved.
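
The arithmetic behind a simple titration can be sketched as follows, assuming a 1:1 reaction stoichiometry between titrant and analyte (the concentrations and volumes are invented for illustration):

    # With 1:1 stoichiometry, moles of titrant delivered at the endpoint
    # equal moles of analyte, so C_analyte = (C_titrant * V_titrant) / V_analyte.
    def analyte_concentration(c_titrant, v_titrant_ml, v_analyte_ml):
        """Concentration of the analyte (mol/L) from endpoint volumes."""
        return c_titrant * v_titrant_ml / v_analyte_ml

    # e.g. 18.5 mL of 0.10 M NaOH neutralises a 25.0 mL acid sample
    c = analyte_concentration(c_titrant=0.10, v_titrant_ml=18.5, v_analyte_ml=25.0)
    print(f"Analyte concentration: {c:.3f} M")  # 0.074 M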

Partnerships

The chemical contractor’s ability to forge partnerships with other analytical laboratories for third-party testing and substantiation is important too. Look for these collaborations, as they can provide an even larger range of analytical services you may eventually need. Not every chemical manufacturer’s lab has 100% of the testing mechanisms and equipment needed to perform every test imaginable; that is why partnerships with other expert laboratories are important. The bottom line is that you need an effective contract manufacturer for your product line, and that means the equipment and resources to process and analyse your chemicals.

Growth Begins Through Your Core, And So Does Decline

I have a feeling the Yellow Cab company wishes they had thought of the Uber app. Let’s face it, most of us didn’t wake up one morning wishing we could jump into the personal car of someone who just pulled up, relatively unchecked by local officials. I use Uber as a preference because of the usefulness of the app, not because I prefer the driver. I will admit the car is almost always cleaner, and something about the Plexiglas barrier in a cab makes me feel I am being taken downtown for questioning. The main reason I use Uber is that I can find a ride from the comfort of my home or hotel room and I am notified as the car pulls up. When we are done, I hop out and save time by direct billing to my credit card. It is easy, and that earned my business. While Uber has yet to make a profit, they essentially set up a taxi company in every major city in the US and many overseas without capitalizing a single car into their fixed assets. There was a well-established, generations-old taxi industry stuck in its ways, not improving and innovating in its core.

This series, Double-Digit Growth in a Slow Economy, discusses the methods that have successfully been used to drive growth when you aren’t able to count on a growing economy. We reference actual cases and companies that were transformed into growth engines beyond the natural buoyancy of economic growth.

This installment discusses the need to begin your growth strategy with your core business. Strength in your core business creates most of the incremental growth opportunities. A strong and vibrant core business also keeps competition at bay. In addition, a well-run core business is the funding source for growth investments.

Part 1 of 3 – Growth begins through your core

The strength of any growth plan when there is no “free growth” emanates from the strength of the core of your business. Shore up your core business first. It is an imperative foundation to fund your growth as well as granting you permission to grow. You need to be the category leader, or at least on the Mt Rushmore of your category. “Core” here means core products as well as core channels. This is your strength to get to the growth table. Of course you need a compelling case for your customers to go through the burden of a change. While the actual growth may come from something outside the core, the core provides most of your credibility to do other things. If you can’t manage your core well, how can you do something new or incremental?

Credible and Competent

If there is little buoyancy in the economy, then your gains need to come from competitors. What we are talking about here is the need to gain market share, to beat a competitor at their own game. It may come in the form of new space on the shelves for your products, more points of distribution, or a wider range of price points that will lift your business. We have to be credible and compelling: credible meaning the customer can see how your company could be a bigger supplier to theirs; compelling meaning there is an actual business benefit for the customer to consider. A gap in either one is a weakness in convincing the market to shift more your way.

Your core may be declining. I see this in more cases than you might imagine. Too often no one is able to see the signs until some damage has been done. If someone else is better positioned to provide your core goods in your core channel, that needs to change, and change quickly. I often hear excuses intended to rationalize why it is OK to be losing sales in your core. Unless you are planning an extreme makeover, your core is your fuel, your funding and the basis of your mojo.

“We can’t make any money in that area anymore.”

And someone else can? You need to unbundle the reasons why and ask how you can reach a cost that would allow you to continue to succeed. It is a pretty simple formula, backing up from the retail or trade price of the items. My approach takes the retail price offered by our customer for the competing items and subtracts their known margin rate, estimated to the best of our ability; we have some knowledge, since we know their margin on our goods. That gives us a frame of reference for their acquisition cost of the competing goods. Taking our margin out next leaves us with an acquisition cost target for our business. Can we build the goods for that cost? Can we acquire them at that cost? It may not all be in physical product costs; it is more likely a blend of product costs and overhead costs. The point is to evaluate product costs, program costs and SG&A equally. Chances are you have imposed some limitations without explicitly doing so. Perhaps the limits are on how you will acquire goods, making versus buying for example. Most manufacturing companies will hardly consider sourcing some of their goods where lower costs may exist. It is very difficult for most companies to step aside from their current practices and challenge the way things have always been done.
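
A worked sketch of that backing-out, with invented numbers, looks like this:

    # Start from the competing item's retail price, strip the retailer's
    # estimated margin, then strip our own required margin; the remainder
    # is the acquisition cost we would have to hit. All figures are
    # hypothetical illustrations.
    retail_price = 100.00    # shelf price of the competing item
    retailer_margin = 0.35   # retailer's estimated margin rate
    our_margin = 0.25        # margin our business requires

    retailer_cost = retail_price * (1 - retailer_margin)  # what the retailer likely pays
    cost_target = retailer_cost * (1 - our_margin)        # what we must build/buy it for

    print(f"Retailer acquisition cost: ${retailer_cost:.2f}")  # $65.00
    print(f"Our cost target:           ${cost_target:.2f}")    # $48.75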

I have been through this exercise many times, and the first phase is usually focused on disbelief that anyone can sell at a price lower than our company and make money. Probably not true. While there are “loss-leader” items out there, it isn’t all that common. Let’s assume someone can produce the goods at the necessary acquisition cost. What would we have to do to get there? We often start with product redesign, value management, etc. Those are important things to do and help take out unnecessary product costs. But don’t forget the other costs. What are your programs, discounts and policies? They may be excessive, and while you have to fund those elements, your competitor may have taken a net-price approach, or taken some in price and some in smaller programs. Would you believe a company would fund programs that equaled 22% of sales? I inherited one. This is a difficult legacy to reverse as an incumbent supplier. Gross-to-net calculations are something the finance team can do to help identify these costs.

There are some other not-so-hidden costs within SG&A. If your SG&A is 22-25% in consumer durables, you are looking at part of your challenge. Typically a new entrant that takes space from your core has at least one real advantage: your existing business is their incremental business. It is always easier to justify investment when the business is incremental. This is likely the only true advantage they have, aside from physical differences you can identify.

Customers as competitors

In each business I have led, we faced a very familiar competitor. To varying degrees, our customers were also competitors as they developed mature direct sourcing operations. Initially this is highly concerning: if they can find direct sources in low-cost countries, they are undoubtedly increasing margin rates and will seek to shift business. Fortunately for a good manufacturer or distributor, this is not always a total loss, or even a long-term loss. It does require your organization to minimize your cost structure, and that is healthy. There are advantages to using the traditional supply chain over a Low Cost Country (LCC) direct sourcing model. The customer has to take on responsibilities long held by the domestic supplier: inventory ownership, inventory management, investment in new products, warranty costs, shipping and logistics, and transitional costs are often overlooked initially in direct sourcing models. To compete here, we emphasize the need to strengthen your core and your service levels. I have found in a number of cases that business moved away in a direct sourcing effort often comes back in a reasonably short period of time. It may not come back in its previous form, so flexibility on your part is critical.

Taking on the role of your supply chain looks better through the lens of margin than it does through the lens of management. You have to acknowledge that there is only so much margin, or mark up, you can sustain before you start driving your customer to seek an alternative. What this motivated me to do was to drive as much waste from the business as possible. Because I had limited room for margin, I could not afford to have waste that I was covering in price that in turn may drive my customers to replace me. Companies with extremely good margin rates should always defend their position, but great margins invite competition in some form. The price / value relationship defines how sustainable a strong margin rate will be.

Once we minimized waste, we had to emphasize value. We invested in creating a more dynamic offering. Taking more to our customer in the form of new designs, features, programs, promotions, and analysis of how they could grow their sales added value that was that much harder to replace in a direct sourcing model. The final driver to seek other sources is when we won’t play ball. There are often hard lines drawn somewhere. Many companies I have led had stated objections to providing private label goods alongside their traditional branded goods. I found offering them to be both an important offensive move and an important defensive move. The customer is going to source private label goods. If they source elsewhere, what are the limits of the value proposition found in those goods? Will they have all of the features of your higher-cost goods at a lower price? If you are the supplier, you can play an active role in the design and value creation of the offering. In addition, you avoid a new entrant that has an interest in increasing the span of private label goods at every opportunity. If it is you, you can play a role in balancing that range. Would you rather hold the lower 25% of your category of goods at a lower margin, or invite someone else to hold that space?

Private label goods are a growth opportunity for most businesses, not a threat. They are going to exist; you have a choice to be in the mix or not. You do not need to provide the full suite of programs and support for the private label, so while it sells at a lower price, it need not be at a dramatically lower margin. If you have read my prior writings, you will also note it is about creating EBIT dollars, not holding a margin percentage. A strategic supplier should be able to provide and manage an entire category of goods for a channel partner. I recall the first time I told Home Depot our company would design and provide their private label goods. The room went silent; I was asked to restate my position. My predecessors had been so steadfast in their refusal that we were not even considered a source when the opportunity opened up. Explicitly proposing it showed we were a partner for the entire category, not just when it was in the interest of our brand name.

The peanut butter approach leads to poor business-nutrition

Companies and financial organizations often assume their overhead structure is uniform across various business types. That is unlikely to be the case: customers have varying costs to serve, and there are more risks in one business than in another. Dissecting this variance can be important when your business is under competitive threat. Let’s use an example where a business has a general cost structure of 35% on top of cost of goods, accounting for SG&A, distribution and logistics, loss, shrinkage, etc. Not all parts of the business generate or require an equal proportion of those costs. No matter how hard you try to avoid it, cost-plus pricing is usually the starting point in establishing price.

Cost + margin requirement = price?

We all know it is not market-based, but we all do it at one time or another. The more you do, the more you apply a general overhead structure across all of your categories of business and customers. Your competitor may not be doing this. They may allocate their overhead and direct costs correctly, meaning that some of their business cases can be seen as profitable where others would see them as marginal. It also means that these allocations have to go somewhere, and this part of the exercise leads you to a much better understanding of what parts of the business are driving your cost structure. If one part of the business has lower direct costs than another, then those wrongly applied costs will now be correctly applied to the part of the business where they belong, and that pressure will lead to asking the right questions. I call this process creating a segmented P&L model: create detailed P&Ls for segments of your overall business and have accountable leadership responsible for and reporting on each P&L. It is common to have 10 or more segmented P&Ls in a business. It is an excellent management tool and will be discussed in greater detail in this series. The takeaway for the topic of managing your core business is that not all business segments have the same overhead burden and direct costs to serve. Knowing this allows you to manage your business in a much more informed way. Markups or margin rates can vary. Force your overhead lower when margins are lower. The segment P&L will show you what costs are driven by various parts of your business. If you don’t like what it looks like, it is time to intervene and make a change in the cost structure.
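
A minimal sketch of such a segmented P&L appears below. The segments, rates and figures are hypothetical; the point is that each segment carries its own cost-to-serve rather than a uniform allocation:

    # Each segment gets its own allocated overhead rate instead of one
    # blended "peanut butter" rate across the whole business.
    segments = [
        # name,           sales,     cogs,      allocated overhead rate
        ("Big-box retail", 5_800_000, 4_100_000, 0.18),  # low-touch, high volume
        ("Dealer channel", 2_400_000, 1_500_000, 0.30),  # high service cost
        ("Private label",  1_100_000,   850_000, 0.12),  # few programs, lean support
    ]

    for name, sales, cogs, oh_rate in segments:
        overhead = sales * oh_rate
        ebit = sales - cogs - overhead
        print(f"{name:15s} sales ${sales:>9,}  EBIT ${ebit:>8,.0f} "
              f"({ebit / sales:5.1%} of sales)")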

A good exercise to test the vulnerability of your core business is to have a team act as if they were a new challenger to your business. How would they enter the market? What advantages could they create? How would they overcome barriers to entry? Should they leapfrog the current channels of distribution? How can they take business from… us? The answers that come from this exercise may lead to initiatives you can take to strengthen your core business. If the thought of a competitor taking any of these actions concerns you, it is always good to act by implementing these things yourself.

Self-imposed barriers

It is likely you have placed barriers in the way of continuing to lead in your core without realizing it. Habitually doing things the way we have always done them is at the root; not challenging our barriers is the problem. “How can I… ?” is not often asked before a crisis occurs. Trying to preserve a supply chain that is too costly, perhaps a plant. Not driving costs out of the goods so you can be competitive. You may have too high an overall cost structure, the combination of SG&A and product costs. You may be protecting an underperforming sales or marketing function by assuming they are better than they are. You may not be looking for ways to cut out burdensome steps your customer faces in doing business with your company. Whatever barriers are preventing you from gaining in your core are the ones that may allow someone else to take that business from you. Study what competitors do better or differently; often we don’t understand the complete benefit of what someone does differently than us.

On Southwest's “Bags Fly Free”

What is the benefit to the company? Greater ticket sales from passengers who do not like baggage fees? Perhaps. That is the benefit people often perceive, but the ones that are overlooked go deeper into Southwest's cost structure and overall efficiency. Airlines that charge for bags give passengers an incentive to carry on, which brings a large number of bags to the gate. That means more bags through security, longer lines, and higher TSA costs, which airlines pay and, in Atlanta for example, recently had to supplement with their own employees to help move passengers through. Look at the boarding process: it becomes a race to board first and claim space for your bag. Then there are the bags that won't fit, often discovered onboard when a few passengers cannot find space. The flight attendant is redirected to look for room, and when the bins are completely full a few passengers have to walk their bags back to the jetway, where someone from the gate and ground crew is redirected to tag those bags to the destination and load them into the baggage compartment. The result is a much longer boarding time and three employees pulled from their primary tasks for exception processing.

Southwest has a competitive cost advantage by not charging for bags. Other carriers may waive the fee for their best customers, but they drive a tremendous number of bags into a less efficient process. It is all a component of total price, but one method creates a great deal of overlooked cost. Southwest doesn't have a gate agent begging passengers to gate-check bags. They board the flight and get their fixed asset back in the air.

Baggage fees are promoted in business articles as a large revenue stream that has become key to airline profitability. Yet the more airlines charge, the more bags come to the gate. I pondered this recently while boarding a flight: at $25 per bag, many bags come to the gate. At $7 per bag, would most be checked? If three times as many were checked at $7 per bag, would the airlines have nearly the same revenue and far more efficiency in processing the bags?
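A quick back-of-envelope check makes the arithmetic concrete. The bag count and the threefold uptake are assumptions for illustration, not airline data:

```python
# Back-of-envelope comparison of baggage-fee scenarios.
# The bag count and uptake multiple are hypothetical assumptions.

bags_at_high_fee = 100      # bags checked per flight at the high fee
high_fee, low_fee = 25, 7   # dollars per checked bag
uptake_multiple = 3         # assume 3x more bags checked at the low fee

revenue_high = high_fee * bags_at_high_fee                  # $2,500
revenue_low = low_fee * bags_at_high_fee * uptake_multiple  # $2,100

print(f"High fee: ${revenue_high}, low fee: ${revenue_low} "
      f"({revenue_low / revenue_high:.0%} of the high-fee revenue)")
```

Under these assumptions the low fee recovers roughly 84% of the revenue while pulling three times as many bags out of the cabin and the boarding process.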

What is your core?

The core of your business is important to define and manage. It is the source of your scale and should be the financing mechanism for growth. A strong core gives you permission to grow and expand into additional categories and with additional customers. A weakened core stresses the business and leaves customers less convinced of your value proposition. Losing control of your core is not as uncommon as it would seem; every business I have led had lost control of its core. Too often declines in the core business are explained away little by little. An unattended core tends to become static and often leaves room for competition to creep in. Private label can also become a larger competitor to your core if the core is static. A dynamic core tends to keep competition at bay and, as we will discuss later, private label plays a role, but a competitor need not supply it. Supplying a private label line your customer is seeking is a good opportunity both offensively and defensively.

The core of your business is often the legacy business and represents not only a majority of sales, but also a majority of margin dollars, and most of the resources of the organization are designed to support it. The core will have mature systems and a mature supply chain. The core of the business is defined not only by goods, but by channels, and it is very common for the core business to have branches that are non-core. A business I led in 2010 had 58% of all sales through one big box retailer. It would be easy to define the core as that customer; it was the majority of sales, and we were highly tailored to serve that customer's systems as a priority. But part of that 58% was two small programs that made up 6% and 1% of sales respectively. These programs were sold only to this customer, had completely separate supply chains, and were totally separate product categories. These smaller branches were non-core because of the differences in supporting those businesses.

The second largest customer represented 15% of sales. Non-core because it is much smaller, right? Well, the channel was another big box retailer in a similar space and the goods were the same type. Although the SKUs were not identical, the supply chain was the same and the same processes and resources managed both businesses. This business also fit the definition of our core. The third largest customer represented 12% of sales. It was also a big box retailer, but it served a very different market and had much different product and service requirements. The type of goods was similar to our previously defined core, but the supply chain challenges were different. While it was a good business, it did not fall into the definition we are seeking for the core business. This in no way means we deemphasize it; on the contrary, it may be one of our growth engines, enabled by our strong core since it is adjacent.

Sometimes a company will define its core not in the form of a business, but of a process. Cabinet manufacturers may feel their core is woodworking. A faucet company may feel its fundamental competency is machining brass. These are perhaps core processes that are critical to cost control, but they are not core “businesses”. If you define your core by a technology, you are almost certain to face a challenge at some point that threatens that technology. By 2010, faucet manufacturers were required to remove lead from the brass alloys used in faucet construction for products sold in the state of California. Rather than building two types of faucets, a single standard design was highly preferred. New technology was available that used polymer parts in place of much of the brass in the original designs. While brass machining was a core processing technology, the core business was faucet manufacture for sale through retail home centers and large wholesalers in the US and Canada. The core business was unchanged by the regulations, but the core technology shifted from brass machining done in-house to high-tech polymers sourced outside. Most competitors failed to question the material choice because of their large investments in brass machining centers and a lack of willingness to explore alternatives.

Have you heard of FW Woolworth Company?

It was once the largest retailer in the US and the builder of one of the first skyscrapers in New York, the Woolworth Building. Its core business was a self-service retail format called the five and dime store. By the late 1980s that core was gone, replaced by other retail formats. Woolworth did not retain its leadership advantage because it never reinvented its core business or kept the core dynamic. The core did provide great funding for expansion; the Woolworth Building was paid for in cash, and the company made efforts to expand the core, but it forgot to maintain the core. One of the investments funded by the core led to where the FW Woolworth Company is today: it is now Foot Locker.

Your core includes the .com space

It may not be in your core yet, but your core goods are there, and if the .com space isn't part of your core channels, you are missing out. Often this is self-limitation driven by fears of channel conflict; at other times it is a lack of familiarity with exactly how much of your category is being sold online. Chances are your core customers are selling online today. Are you integrated into that? Can you serve those sales on a direct-ship basis? I advise companies to embrace the change and to design their systems to serve this channel broadly and directly.

Why the focus on your core?

You will struggle to grow from a weakened core business. Risks to the core are often overlooked, and, believe it or not, the core is often neglected. In 25 years I have been involved in the leadership of four companies. All of them lost focus on their core and had to retrench after losing significant share. Sometimes a company will lose its core entirely. In my case, one company had lost $140m in core sales and another nearly $1b. It wasn't a market force that led to the loss. Recovery was a multi-year effort, and the makeup of each company came out quite different. Worse yet, investors suffered through both the decline and the recovery. The time lost rebuilding should have been spent on growth extending from the core, not restoring the core.

Syringe Filters – Most Compatible Filters

A syringe filter is a wheel-shaped filter consisting of a plastic body and a membrane used to remove contaminants. A universal Luer lock fitting makes it versatile. Syringe filters are mainly used to remove microorganisms and particles from liquids or gases: a syringe is coupled to the filter and the sample is pushed through the membrane. They are available in various pore sizes and with many hydrophilic or hydrophobic membrane materials. Syringe filters are used in HPLC sample preparation, routine QC analysis, food analysis, environmental sample analysis, biofuel analysis and removal of protein precipitates.

They have applications in many industries, including pharmaceutical, food and beverage, environmental and general laboratory work. In HPLC they remove particles from the sample before it enters the column, which improves the accuracy of results and extends the lifespan of the column.

These filters can have a membrane of polypropylene, nylon, mixed cellulose ester, PTFE or PES. Membrane selection depends on the nature of the sample. Aqueous samples are filtered with hydrophilic membranes such as nylon or mixed cellulose ester syringe filters. The mixed cellulose ester membrane, composed mainly of cellulose nitrate and cellulose acetate, is ideal for sensitive biological samples.

For non-polar solvents, hydrophobic membranes such as polytetrafluoroethylene (PTFE) are used. PTFE is a chemically resistant polymer suitable for highly corrosive and aggressive solvents; acids and bases can be filtered through it because of its high chemical stability, and it also has high thermal stability. PTFE membranes can additionally be used to keep moisture out of air vents.

Protein samples are filtered using low-protein-binding membranes such as polyvinylidene difluoride (PVDF) or polyethersulfone (PES), both of which are hydrophilic. For the inorganic samples used in ion chromatography, PES membranes are preferable, as they allow maximum sample recovery. They are available in both sterile and non-sterile forms. PVDF membranes bind very little protein compared with other membranes and have broad chemical compatibility.

Polypropylene membranes are hydrophobic yet can be used with both aqueous and organic solvents. Pore size and filter diameter should also be considered when choosing one of these filters.

Glass fiber syringe filters are used for samples that are hard to filter, such as tissue culture media or samples with large particles; they can easily handle viscous substances. They are also used in air pollution monitoring and in detecting oil and smoke in the air. They offer high flow rates and high dirt-holding capacity.
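To summarize the selection guidance above, here is a minimal sketch of a membrane chooser in Python. The mapping simply encodes the rules of thumb from this article; the sample-type labels are illustrative, and pore size and filter diameter still need to be chosen separately.

```python
# Minimal sketch encoding the membrane-selection rules of thumb above.
# The sample-type labels are illustrative; pore size and filter diameter
# must still be chosen separately.

MEMBRANE_GUIDE = {
    "aqueous": ["nylon", "mixed cellulose ester (MCE)"],
    "sensitive biological": ["mixed cellulose ester (MCE)"],
    "non-polar or aggressive solvent": ["PTFE"],
    "protein": ["PVDF", "PES"],
    "ion chromatography": ["PES"],
    "viscous or large particles": ["glass fiber"],
}

def suggest_membranes(sample_type: str) -> list[str]:
    """Return candidate membranes for a given sample type."""
    if sample_type not in MEMBRANE_GUIDE:
        raise ValueError(f"No guidance for sample type: {sample_type!r}")
    return MEMBRANE_GUIDE[sample_type]

print(suggest_membranes("protein"))  # ['PVDF', 'PES']
```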