Saturday, March 17, 2012

The Lunar Express

There is a posting on this blog titled "The Westbound Rule". That posting was an effort to make routine flight more efficient by parceling out air corridors to take advantage of the earth's eastward rotation. I explained why it would be best for eastbound flights to fly as low as is practical, because the earth's rotation is working with them, while westbound flights should fly as high as possible, to keep their distance from the rotation that is working against them.

Today, I would like to apply a similar concept to space flight between the earth and the moon.

The astronauts of the Apollo missions of around forty years ago obviously wanted to land on the moon where the sun was shining, not in the dark. The daylight on the moon made the mission, particularly the taking of photographs, much easier than it would have been in the dark. But it is my observation that this may not be the best approach in the long run, if lunar flight ever becomes routine.

Another factor in the lunar missions involved launches. The rockets took off in the afternoon to accommodate the audiences, and so that daylight would minimize the chances of errors and complications. But this meant that the spacecraft was launched eastward, along with the direction of the earth's rotation, and thus had to outpace the earth.

While this may have been good for public relations, it certainly was not the most efficient path.

Picture the moon orbiting the earth, as the earth orbits the sun. Of course, this is only an "apparent" orbit, as I described in the posting "The Earth, The Moon And, The Sun", simply because, at the moon, the gravity of the sun is more than twice as powerful as that of the earth. However, that is not very important for our purposes here.

Let's review the mechanics of the moon, as seen from our perspective on earth.

The moon orbits the earth, relative to the sun, about every 29½ days, in the same eastward direction that the earth rotates. This is why the moon rises about 50 minutes later each day or night: 24 hours divided by 29½ comes to about 50 minutes. The same side of the moon always faces earth because the moon's rotation period, or day, is the same as its orbital period.
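As a quick check of that arithmetic, here is a small Python sketch. The 29½-day figure used below is the synodic month, the period of the moon's phases.

```python
# Rough check of how much later the moon rises each day.
# The moon drifts eastward against the sky, completing one circuit relative
# to the sun each synodic month, so each day the earth has to rotate a
# little longer to bring the moon up again.
HOURS_PER_DAY = 24
SYNODIC_MONTH_DAYS = 29.53   # period of the moon's phases

daily_lag_minutes = HOURS_PER_DAY * 60 / SYNODIC_MONTH_DAYS
print(f"The moon rises about {daily_lag_minutes:.0f} minutes later each day.")
# Prints roughly 49 minutes, the familiar "about 50 minutes".
```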

The phases of the moon that we see are due to the changing angles between the earth, moon, and sun. Full moon is when the moon is on the opposite side of the earth from the sun, so that those on the night side of the earth see the moon fully illuminated by the sun. Unless, of course, there is a lunar eclipse. This happens when the sun, earth, and moon are in the same plane and in a straight line, so that the earth casts its shadow on the moon.

New moon is when the moon is between the earth and the sun so that we cannot see the moon at all. A solar eclipse can happen at this point, if all three are in the same plane and in a straight line. Eclipses do not occur every month because there is a difference of about 5 degrees between the moon's path around the earth and the earth's orbit around the sun.

We see a half moon when the moon crosses the earth's path around the sun. A half moon when the moon's phase is waning, or getting less, between full and new moon, is when the moon crosses the earth's orbit in the direction from which the earth has already passed. A half moon when the phase is waxing, or increasing, is when the moon crosses the earth's path in the direction in which the earth is heading.

At sunrise, the direction overhead is the direction from which the earth has come in its orbit around the sun. At sunset, the direction overhead is the direction in which the earth is heading. This means that a waxing half moon will be overhead at sunset, and a waning half moon will be overhead at sunrise, taking the observer's latitude into account.

Let's express the path of the moon around the earth, relative to the sun, in degrees and quadrants. Let 0 degrees be the new moon, 90 be waxing half moon, which is also called "first quarter". Let 180 be the full moon and 270 be the waning half moon, also known as "last quarter". This fits in with the posting on this blog, "New Trigonometric Functions", in which I proposed a function based on 180 degrees, "The Lunar Function", in addition to the standard 90 degree functions.
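To make that quadrant convention concrete, here is a small Python sketch of it. The function name is just an illustrative choice, and the in-between quadrants are labeled with the standard crescent and gibbous names.

```python
def phase_name(angle_degrees: float) -> str:
    """Name the moon's phase from its angle along its path, relative to the sun.

    0 = new moon, 90 = first quarter (waxing half moon),
    180 = full moon, 270 = last quarter (waning half moon).
    """
    a = angle_degrees % 360
    if a == 0:
        return "new moon"
    if a < 90:
        return "waxing crescent (first quadrant)"
    if a == 90:
        return "first quarter (waxing half moon)"
    if a < 180:
        return "waxing gibbous (second quadrant)"
    if a == 180:
        return "full moon"
    if a < 270:
        return "waning gibbous (third quadrant)"
    if a == 270:
        return "last quarter (waning half moon)"
    return "waning crescent (fourth quadrant)"

for angle in (0, 45, 90, 180, 270, 315):
    print(angle, "->", phase_name(angle))
```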

Here is a link to a diagram of the moon's phases: www.moonconnection.com/moon_phases.phtml

There are three gravitational zones that we will deal with in a trip between the earth and the moon: the zone where the earth's gravity is the strongest influence on the spacecraft, the zone where the moon's is, and the zone where the sun's is. At the moon, the sun's gravity is more than twice as powerful as that of the earth, so the majority of the trip will be spent in the sun's gravitational zone.
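That figure is easy to check. Here is a short Python calculation using standard rounded values for the masses and distances involved.

```python
# Check of the claim that, at the moon, the sun's pull is more than twice
# as strong as the earth's pull. Standard rounded values are used.
SUN_MASS_KG = 1.989e30
EARTH_MASS_KG = 5.972e24
SUN_TO_MOON_M = 1.496e11     # roughly the earth-sun distance
EARTH_TO_MOON_M = 3.844e8

# The gravitational constant cancels out of the ratio.
sun_pull = SUN_MASS_KG / SUN_TO_MOON_M ** 2
earth_pull = EARTH_MASS_KG / EARTH_TO_MOON_M ** 2
print(f"sun's pull on the moon / earth's pull on the moon = {sun_pull / earth_pull:.2f}")
# Comes out at about 2.2, so the sun's grip on the moon really is
# more than twice as strong as the earth's.
```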

The concept that I want to discuss today is my vision of optimum points of departure and return on opposite sides of the moon's path around the earth, about two weeks apart. The great advantage of this is that most of the flight would be simply letting gravity do the work for us. The spacecraft can be made to literally "fall" toward its destination: first to the moon, and then on the return flight to the earth.

When a spacecraft is on the way to a destination toward the sun, such as Mercury or Venus, the sun's gravity can be put to work. At launch, the spacecraft is essentially a part of the earth orbiting the sun. If we point the engines of the spacecraft in the direction of the earth's orbital path, the thrust will counteract the orbital momentum around the sun that the spacecraft has. This will cause it to lose orbital momentum and literally fall toward the sun, and its destination.

My thought is that a launch early in the morning, some time after third quarter (waning half moon), would bring about the best flight efficiency. Once the spacecraft left the earth's gravitational zone, the gravity of both the sun and the moon would be working for us, as well as the earth's rotational momentum. Assuming that the flight takes a few days, this would land the spacecraft on the moon with the new moon approaching, when the side facing the earth is in the dark. Once we are more experienced at lunar landings, this should not be as much of a problem as it would have been in the days of the Apollo landings.

A first-quadrant launch, between new moon and first quarter, is also a possibility. But this would make it necessary to speed up the spacecraft, to get ahead of the earth in its orbit around the sun. This would be less efficient than simply losing orbital momentum so that the spacecraft literally falls toward its target.

I see the return flight back to earth as being best made as we approach full moon, after first quarter, when the gravity of both the earth and the sun would be working for us. Whereas if we went to the moon near full moon, this most powerful gravitational combination would be working against us. The return trip should be easier anyway, simply because the gravity of the earth is much more powerful than that of the moon.

So, if we approach the moon between last quarter and new moon, and return between first quarter and full moon, all we need to do is lose orbital momentum by pointing the rocket engines in the direction of the earth's orbit around the sun, so that the thrust counteracts that momentum, and we will literally fall along either journey. Gravity will do most of the work for us. We must always consider the tremendous gravity of the sun, so we aim to reach the moon while the moon is on the sunward side of the earth, and to return to earth while the earth is on the sunward side of the moon.

There is another thing to consider for lunar flights. The Apollo lunar missions from Cape Canaveral in Florida first went into an equatorial orbit around the earth and then, upon arrival, an equatorial orbit around the moon. The landing sites were thus relatively close to the moon's equator.

Another possibility is using polar orbits. This would be more complex, in that we would have to calculate the trajectory in another dimension as well, that of north-south. The moon would be approached so that the spacecraft would be to the north or south of the plane of the moon's path around the earth, and it would go into orbit over the moon's north and south poles rather than around its equator. To accomplish this, we could make use of the 5 degree difference between the plane of the earth's orbit around the sun and that of the moon's path around the earth.

There are disadvantages to the polar orbit route. There is no rotational momentum to build on during launches, and the mission is simpler if all is kept in the same plane. But a polar approach would make it much easier to land on any specific site, both on the moon and the return to earth, instead of just in the zones around the equators.

One more thing to remember during spaceflights like these. The posting "The Effective Center Of Gravity" on my physics and astronomy blog, http://www.markmeekphysics.blogspot.com/ , explains why the commonly-held belief that the center of mass and the center of gravity of a moon or planet are the same thing must be incorrect.

In that posting, I explained that while the center of mass will be constant, the effective center of gravity will vary with our distance from the moon or planet. The two will be the same only if we are an infinite distance from the moon or planet. This is because the near side of the planet is closer to us, so it must have a greater gravitational influence on us than the far side. The closer we are to the moon or planet, the closer its effective center of gravity is to us, and the greater the difference between the center of mass and the center of gravity.
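As a rough illustration of that idea, here is a toy Python calculation. It models a planet as just two equal point masses, one on the near side and one on the far side, and finds where a single combined mass would have to sit to produce the same pull. All the numbers are arbitrary, chosen only to show the trend.

```python
import math

def effective_distance(d: float, r: float) -> float:
    """Distance at which one combined mass would give the same pull as two
    equal point masses at distances d - r and d + r (a toy two-point planet)."""
    pull = 1.0 / (d - r) ** 2 + 1.0 / (d + r) ** 2   # G and the masses cancel out
    return math.sqrt(2.0 / pull)

R = 1.0   # "radius" of the toy planet, in arbitrary units
for d in (2.0, 5.0, 20.0, 1000.0):
    print(f"distance to center of mass {d:8.1f} -> effective distance {effective_distance(d, R):9.4f}")
# The effective distance is always a little less than the distance to the
# center of mass, and the difference shrinks as we move farther away.
```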

The Greatest Waste Of Fuel And Energy

It is well-known how inefficient car engines really are. What this means is that, no matter how well an engine is designed, most of the energy in the fuel that is released by combustion in the cylinders goes to produce heat, instead of useful mechanical energy. This is why an engine requires a cooling system. The engine would not take long to self-destruct if the excess heat could not be absorbed by liquid coolant and dissipated through the radiator.

What this means is that whenever you stop to put that expensive fuel into your tank, the majority of the energy in that fuel, which will be released in the engine, will not go into getting you to your destination. Rather, it will go toward heating up the coolant in your engine so that the heat can be dissipated by the radiator.

To be sure, this engine heat is not all wasted. Engine heat is useful in that it reduces the viscosity of the oil, enabling it to flow more readily to lubricate the engine parts. This is why the worst wear and tear on the engine occurs when it is started, before the oil has begun to flow. This also explains why an engine will wear faster in a cold climate.

Engine heat also warms the incoming air from the air filter, so that it can hold more vaporized (vapourised) fuel on its way into the cylinders for combustion. Does anyone remember older cars, in which a choke would be closed upon starting to block incoming air so that the fuel-air mixture would not be too lean? The choke would then be opened after the engine had gained some warmth.

Finally, of course, the engine heat warms the passengers in cold weather when the antifreeze/coolant is circulated through small radiators in the passenger compartment. Readers in colder climates may have noticed that there are trucks in the winter with cardboard placed over the radiator grill to conserve heat in the engine.

Nevertheless, the fact is that most of the energy in the fuel that is released by combustion in your engine goes to produce heat, and most of that heat is wasted by dissipation through the radiator. What would it be like if we could save, or make use of, this tremendous amount of wasted energy?

When combustion takes place in a cylinder of the engine, we want the combustion to be as rapid as possible. The quicker the combustion, the more mechanical energy is produced by the force of the rapidly expanding exhaust gases against the piston, as opposed to heat. The slower the combustion, the more energy ends up as heat instead of useful mechanical energy.

Efforts have been made to speed combustion by using dual spark plugs in the cylinder, or by using small lasers to initiate combustion instead of spark plugs. But no matter how quickly combustion can be made to take place, or how efficient the engine can otherwise be made, most of the energy in fuel ends up as heat, and not as useful mechanical energy. The noise that is produced by an engine also requires energy, and is another route of waste.

This waste heat cannot be converted into mechanical energy, as things stand now. Saving the heat wouldn't do any good either; if we insulated the engine to hold in the heat, it would only run hotter, not more efficiently.

But what if we could develop a small and efficient boiler, and replace the radiator with it? A boiler can generate electricity by using steam to move a piston. When boiling water under pressure is suddenly given the chance to expand, it will vaporize (vapourise) into steam which will exert great pressure on the piston, which would then move to create relative motion between a magnet and a coil of wire. We would logically use water, instead of the usual engine coolant, because the coolant raises the boiling point of water so that more energy would be required to make the steam. The energy which would otherwise be lost as heat from the engine could then be recouped as electricity.
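To put rough numbers on the idea, here is a back-of-envelope Python sketch. The engine efficiency, the share of the waste heat that actually reaches the coolant, and the efficiency of the small on-board steam generator are all assumed, illustrative figures, not measured data.

```python
# Back-of-envelope estimate of how much waste engine heat a small boiler
# might recover as electricity. All the fractions below are assumptions.
fuel_power_kw = 100.0            # rate at which fuel energy is being burned
engine_efficiency = 0.25         # fraction that becomes useful mechanical work
coolant_share_of_waste = 0.5     # fraction of the waste heat carried off by the coolant
steam_cycle_efficiency = 0.10    # assumed efficiency of a small on-board steam generator

waste_heat_kw = fuel_power_kw * (1.0 - engine_efficiency)
heat_to_coolant_kw = waste_heat_kw * coolant_share_of_waste
recovered_electric_kw = heat_to_coolant_kw * steam_cycle_efficiency

print(f"Mechanical power from the engine:          {fuel_power_kw * engine_efficiency:.1f} kW")
print(f"Heat now dumped through the radiator:      {heat_to_coolant_kw:.1f} kW")
print(f"Electricity a small boiler might recover:  {recovered_electric_kw:.1f} kW")
```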

The trouble is that small boilers are inefficient. The reason for this is that surface area is two-dimensional, while volume is three-dimensional. So as the size of something is reduced, the volume decreases faster than the surface area. It is from the surface area of a boiler that heat leaks away as waste. So, a smaller boiler will lose a greater proportion of heat, because it has a greater surface area relative to volume, and will thus be less efficient than a larger boiler. If you have ever seen a demonstration of a steam locomotive, it is easy to feel how much heat it throws off. This heat is, of course, waste.
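The square-cube relationship behind that claim is easy to see with a quick Python sketch. Spherical boilers are used here only because the formulas are simple.

```python
import math

def surface_to_volume_ratio(radius: float) -> float:
    """Surface area divided by volume for a sphere of the given radius."""
    area = 4.0 * math.pi * radius ** 2
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    return area / volume          # works out to 3 / radius

for radius in (0.5, 1.0, 2.0, 4.0):
    print(f"radius {radius:4.1f} -> surface area per unit of volume {surface_to_volume_ratio(radius):.2f}")
# Doubling the size halves the surface area per unit of volume, so a bigger
# boiler has proportionally less surface through which to lose heat.
```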

However, one crucial difference between a boiler and an internal combustion engine, with regard to efficiency, is that holding in heat by insulation will improve the efficiency of a boiler, while it would only make the engine run hotter.

Suppose that a car had a small, but efficient, insulated boiler which would use the waste heat from the engine to produce electricity. This boiler would not need to be any larger than the radiator is, which the boiler would replace. The electricity produced by this boiler would charge batteries, until there was enough electricity stored in these batteries to run the car on an electric motor, or four small electric motors, one at each wheel.

Think of pre-nuclear submarines. There used to be both a diesel and an electric motor on these boats. The sub would run on the diesel motor while on the surface. All the while, this motor would be re-charging the ship's batteries. Then, while the sub was submerged, it would run on the electric motor, drawing its energy from the charged batteries. The electric motor, unlike the diesel motor, can operate underwater because it has no need of air.

Just think of the fantastic amount of energy that this would save. I think that this is a whole new avenue in the search for energy, and in the effort to lessen global warming. In fact, I am sure that it is possible to design a dual engine which could operate either as an internal combustion engine or as an electric motor, so that the car would not require a separate electric motor. The car could automatically switch between electric and combustion modes according to the amount of electricity stored in the batteries.

Cars originated when fuel was cheap. Those days are long gone, but we are still using the same basic design of car. It's time for some new thinking.

Our Solar Future

Solar energy is already in widespread use across the world. Yet, we clearly have a very long way to go before it reaches anything like its full potential. It seems to me that the state of solar energy today is very similar to that of electricity as a whole, back in the Nineteenth Century.

The first electric currents to be produced and controlled by people came from chemical reactions. A device made to produce electric current by chemistry is known as a battery. But batteries were only the beginning of electricity as we know and use it today. The vast majority of the electric current that we use is generated in some way. These generation methods can range from hydroelectric dams to use of steam produced by a boiler to turn a magnet relative to a coil of wire so that the magnetic lines of force will move electrons in the wire, resulting in an electric current.

Solar energy is used today to produce electricity primarily by solar cells. As the term implies, these cells are compact devices which take in energy from the sun and use it to produce an electric current.

Solar cells have proven to be extremely useful in a wide variety of applications, including spacecraft. But solar cells have more in common with batteries than the mere fact that both give us electricity. Just as batteries were only the beginning of electricity as we use it today, solar cells are only the beginning of the "Solar Revolution".

As useful as batteries have always been, they are useful only for small-scale applications. They are wonderful for vehicles and portable devices of all kinds, but we cannot possibly power a city, or an entire country, with batteries. Batteries gave us the experience and understanding to handle electricity as required for the electrical grids across the world, but those grids were not possible until large-scale ways were developed to generate electricity.

I find that the same is true of solar cells. As useful as they are, they are necessarily only the beginning in the same way as batteries once were. A fortune in solar energy falls on the roof of any large building, even in the winter. Yet, solar cells are just too expensive to make to be a practical solution to harvesting this energy, except on a very limited scale. Just as the development of our modern electrical grids required methods of large-scale generation, the harvesting of solar energy will require large-scale methods which go far beyond solar cells.

Countries vary widely in their attitudes toward, and use of, nuclear power to generate electricity. But it is often said of nuclear power that, no matter how much science seems to be involved in its use, all that nuclear power really comes down to is just another way to boil water. The steam from boiling water is used to provide mechanical energy, which turns a magnet inside a coil of wire to generate electricity. It makes no difference whatsoever whether the heat to boil the water comes from a coal boiler or from a nuclear reaction.

So, if any method of making large quantities of water boil can be used to generate electricity, what about the sun? If there are solar reflectors which concentrate the sun's energy enough to cook a meal, or to cut through a piece of steel, why can't we concentrate the energy from the sun to boil water? We could then generate vast amounts of electric current.

Everyone has seen how the rays of the sun can be concentrated to a point by a magnifying glass. Why can't the roof of a generator building be constructed as a lens in the same way? The entire roof could be made of glass or plastic panels, assembled so that they focus the energy of the sun to a point inside the building. The roof would not necessarily have to be circular; it could be rectangular and focus the sun to a line, rather than a point.

The solar energy would be focused on a boiler, which would then produce electricity in the same way as a boiler heated in any other way. If any of the roof panels were knocked out of alignment, by wind for example, they could be realigned. There could be a reflector on the side of the roof opposite the sun, to bring in even more energy. Heat radiation has a longer wavelength than light, and cannot be focused as sharply, so it would not matter much whether or not the energy was brought to a sharp focus. Of course, the solar roof would have to be kept clean and free of snow.
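To get a sense of scale, here is a rough Python estimate of the power such a roof could deliver. The irradiance figure of about one kilowatt per square meter is the standard full-sun value at ground level; the roof size and the optical losses are assumptions.

```python
# Rough estimate of the solar power a lens-like roof could deliver to a boiler.
roof_length_m = 50.0                   # assumed building dimensions
roof_width_m = 30.0
full_sun_irradiance_kw_per_m2 = 1.0    # standard "full sun" figure at ground level
optical_efficiency = 0.6               # assumed losses in the panels and alignment

collected_kw = roof_length_m * roof_width_m * full_sun_irradiance_kw_per_m2 * optical_efficiency
print(f"Roof area: {roof_length_m * roof_width_m:.0f} square meters")
print(f"Power delivered to the boiler in full sun: about {collected_kw:.0f} kW")
```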

It is true that the larger such a generating plant was, the more efficient it would be. This is simply because large boilers are more efficient than small boilers. A boiler inevitably loses some waste heat through its surface area, and the volume is three-dimensional while the surface area is only two-dimensional. This means that the volume changes faster than the surface area as the boiler changes in size, so that a larger boiler has less surface area, through which to lose heat, per unit of volume than a smaller boiler.

There are certainly other possible variations on the plan of a solar generating station. Instead of a roof, acting as a lens, the boiler could be installed over a concave mirror which would focus the energy of the sun on it.

There could be smaller-scale generating facilities as "solar bubbles", with a boiler and generator inside.

Perhaps the easiest and most practical design of all for harvesting the abundant energy that comes from the sun is a long pipe, suspended above the ground, with a concave reflector underneath its entire length. The sun would shine on the reflector, which would focus the light onto the pipe. The pipe would naturally be dark in color (colour), to absorb the maximum amount of solar energy. The pipe could be laid out in a straight line or, more likely, in twists and turns. Cold water would go in one end of the pipe, and boiling water would emerge from the other end. The boiling water would then be used to generate electricity.
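Here is a rough Python sketch of how much water such a pipe-and-reflector line might bring to the boil. The collector size, the optical efficiency, and the inlet temperature are assumed figures; the heat capacity of water is a standard value.

```python
# Rough estimate for a reflector-and-pipe solar collector bringing water to the boil.
collector_length_m = 100.0
aperture_width_m = 2.0             # assumed width of the reflector gathering sunlight
irradiance_w_per_m2 = 1000.0       # standard "full sun" figure
optical_efficiency = 0.5           # assumed reflector and absorption losses

heat_capacity_j_per_kg_c = 4186.0  # specific heat of water
inlet_temp_c = 20.0
boiling_temp_c = 100.0

collected_w = collector_length_m * aperture_width_m * irradiance_w_per_m2 * optical_efficiency
energy_per_kg = heat_capacity_j_per_kg_c * (boiling_temp_c - inlet_temp_c)
flow_kg_per_s = collected_w / energy_per_kg

print(f"Power gathered along the pipe: {collected_w / 1000:.0f} kW")
print(f"Water brought to the boil: about {flow_kg_per_s * 3600:.0f} kg per hour")
```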

But whichever design is used, it seems very clear that while solar energy can provide us with all the energy we need, we are in the same place now with solar cells as we were with batteries in the early days of electricity. Some method of large-scale generation is the next step.

The Australia Sequence

There is a posting on this blog titled "The Other Side Of Global Warming". The posting discussed a side of global warming that does not get anywhere near as much attention as the increasing amount of carbon in the air. The removal of hundreds of millions of trees to make way for development has reduced the carbon sink that trees provide. Not only are we putting more carbon in the air, we are removing the trees that would have absorbed carbon from the air during their growth.

One of the perils of even a slight overall warming of the earth is extremely powerful and destructive hurricanes, and other wild weather. Today, I would like to discuss yet another side of global weather that does not get much attention.

In the posting on the meteorology blog, "The Atlas Barrier", we saw why there is a gap in the coastal barrier islands of the eastern and southern U.S. in the states of South Carolina and Georgia. It is because dust from North Africa is essential to provide condensation nuclei upon which water can condense to form the vast amount of dense cloud necessary for hurricanes. As the dust is swept out over the ocean by the east wind, it allows evaporated water from below to condense upon it. I identified the absence of significant barrier islands in those states as being due to the blocking of the wind-borne dust by the Atlas Mountains of Morocco.

This goes to show the vital role of dust in forming hurricanes. Air with the ordinary low level of dust cannot hold the tremendous volume of water necessary for hurricanes. That posting describes a hurricane as a self-sustaining circular storm. Hurricanes, which also go by other names such as "cyclone" and "typhoon", move generally westward because their spin, which they pick up from the spin of the earth, makes them semi-independent of the earth's gravity so that the earth rotates eastward under the storm.

It seems to me that dust is a forgotten side of extreme global weather. It is well-known that global warming creates wilder weather by causing more water to evaporate, but the air could not hold the water for long without abundant dust particles to serve as cloud condensation nuclei. The phenomenon of desertification, the increase of desert area, is pointed to as reducing arable land, but it also means more potential dust in the air to seed hurricanes if the prevailing winds are right to take the dust out to sea.

The way I see it, ground zero for global climate with regard to dust and extreme weather begins in Australia. Never mind the cricket rivalry between Australia and India; those two lands are linked not only by geology (they were once part of the same land mass) but by climate. It is dust from Australia, swept out to sea, that seeds the monsoons and cyclones that afflict India and its neighboring countries, in the same way that dust from North Africa is the foundation for the hurricanes that cross the Atlantic Ocean. The typhoons of the South China Sea are also seeded by dust from Australia. The prevailing winds over Australia carry its loose dust northward, toward the equator.

Here is a map link: http://www.maps.google.com/

Australia is a dry continent that is getting even drier. There are areas which used to be farmed productively, which now cannot be farmed at all. The Government of Australia, along with that of China, is a leader in searching for ways to produce rain. This can only mean more dust becoming available to be swept out to sea by the wind.

More dust in the air does not mean that more water will evaporate from the sea below. There is essentially no more water in the air; the dust just concentrates the rainfall. This means that when the tremendous volumes of rain fall on the Indian Subcontinent, the air emerges dry as the prevailing wind at that latitude moves from the east. So when the air gets downwind to the Arabian Peninsula, there is little or no water left to provide rain.

If there were rain on the Arabian Peninsula, it would be lush and green with vegetation. Much of the water would re-evaporate, or be transpired by the plants, and travel further west on the east wind to fall on the Sahara Desert of North Africa. If that area were then lush and green, it would no longer be the source of the dust that seeds the hurricanes that cross the Atlantic Ocean.

Can you now see how Australia is ground zero for so much of the global climate? It is the beginning of what I have termed "The Australia Sequence". If only we could make Australia lush and green, or even pave it over into a vast parking lot, so that it would not serve as a reservoir of dust, it would completely change the world.

(I am just using the parking lot as an illustration; the last thing that I would want to do is pave over Australia and upset Australian readers.)

There would not be the dust to seed typhoons in the South China Sea. India, and its neighboring countries, would not get the cyclones and extremely heavy rain. There would be water left in the air to be carried along by the east wind to fall as rain in Saudi Arabia. When that water re-evaporated, it would be carried further along by the east wind to North Africa. The Sahara would become green, and would no longer be a source of dust to seed hurricanes heading for the western hemisphere.

What a better world that would be! This would most likely bring a potential increase in the world's food supply of between a quarter and a third, due to the vast increase in arable land.

If only we could make Australia into a lush and green place, it would cease to be a source of dust. India would get much milder rains, and the water that was left would fall on the Arabian Peninsula. Much of that would re-evaporate, or be transpired by plants, and would travel further downwind to fall on North Africa. This would, in turn, make what is now the Sahara Desert into a green place with thriving plants, meaning that North Africa would no longer act as a vast supplier of dust over the ocean to seed the hurricanes that afflict North America and the Caribbean.

It is easy to see why the east coast of South America is free of hurricanes, much unlike North America. Africa south of the Sahara is a land of expansive jungle and grassland. It is not dusty, and so does not supply the dust that would seed hurricanes that would move westward to strike South America. If only we could get Australia covered with plants, we would set a very beneficial sequence in motion.

By the way, this new plant life covering Australia, Arabia, and North Africa would absorb much of the carbon in the air that is the cause of global warming. We would definitely be "killing two birds with one stone", and really bettering the world in the process.

The main reason that Australia is so dry is that the winds which might bring rain are blocked by the Great Dividing Range of mountains, along the east coast of Australia. However, in east-central Australia lies the Great Artesian Basin. This is a vast area of low-lying land, which is Australia's main source of fresh water from wells. Much of the area is below sea level; my world atlas gives the surface of Lake Eyre North as 16 meters below sea level.

What if a canal could be dug, which would flood part of the Great Artesian Basin with sea water? This would form a vast, shallow salt-water reservoir west of the coastal mountains. This water would quickly re-evaporate to be carried westward by the prevailing winds.

Hopefully, it would fall as rain on the vast arid west of Australia. Plants would grow, and farming would thrive. The deserts would become grasslands and the continent would cease to be a source of a significant amount of dust to the region's large-scale weather patterns.

The area is sparsely populated anyway. This salt-water reservoir, covering hundreds of square kilometers, would provide beaches and sites for resorts, although it would not have the waves required for the Australian pastime of surfing. As fish-farming is becoming so popular across much of the world, the reservoir could also be stocked with fish.

The reservoir would be shallow, and if Australians ever changed their minds about the project, all that would be necessary would be to close the canal; the water would not take long to evaporate.

The only drawback of the reservoir is that it would salinate the fresh water within its shores. But this would be more than compensated for by the fresh water falling as rain to the west.

To recap: the proposed project is to dig a canal in Australia to flood part of the Great Artesian Basin with sea water. This is my idea to bring rain to Australia by placing a large surface area of water west of the Great Dividing Range of mountains, since it is these mountains which block the east wind that would otherwise bring rain to this very dry land. This water would evaporate and fall as rain on the vast expanse of western Australia.

I believe that it is an accident of geology which prevented Australia from being the lush and green land that it could have been. This project could really change the world. China and Australia are already tied together economically. Australia is a source of raw materials for China, as well as a destination for Chinese tourists. Many signs in Australia are in Chinese.

Why not work together on this project? There is a serious water shortage in parts of China. This project would result in milder rains that would carry much further inland in China, instead of the destructive typhoons along the coast. The governments of both countries have put a lot of effort into trying to induce rain artificially; why not try this idea?

It would be ideal if we could set up a parallel project to bring water to North Africa, the world's other great source of dust, as well. But there is no comparable area below sea level there which could be flooded. There is the Qattara Depression in Egypt's Western Desert, but I do not think that flooding it would have much effect on the rainfall.

Do We Really Need Calculus?

I once took a class titled "Calculus-Based Physics". I was still learning calculus, and was more adept with spatial mathematics like geometry and trigonometry. I could not help noticing that just about anything that can be solved with calculus can also be solved without calculus. We live in a spatial universe, and the graphing used in calculus is just another way of solving spatial problems.

I find that an under-appreciated gem of basic physics is the Inverse Square Law. The Inverse Square Law states that an object that is twice as far away will appear as one-quarter the size, in terms of apparent area, or, if two radio antennae are broadcasting with equal strength, the signal from the one twice as far away will arrive with one-quarter the strength of the one that is closer.

If we look at a building some distance away, for example, the result is an isosceles triangle (one with two equal sides) with the observer at the point of the triangle and the width of the building forming the base of the triangle. This could also be expressed as a right triangle (one with a right angle) with the height of the building as the vertical axis of the triangle. The Inverse Square Law applies in that, if the building were twice as far away from the observer, it would appear as half its former width and height, and thus only a quarter of its former apparent area.

The reason for the Inverse Square Law is that anything spreading out evenly from a point gets spread over the surface of an ever-larger sphere centered on that point. The area of a sphere is 4 times pi (3.1415927 is as many decimal places as I have it memorized) times the square of the radius, which represents the distance. So if we double the distance, the same energy is spread over four times the area, and an object of a given size, lying on that sphere with the observer at the center, covers only a quarter of the share of the sky that it did before.
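A few lines of Python make the pattern concrete. The distances are arbitrary; the point is the factor-of-four-and-nine pattern.

```python
def relative_intensity(distance: float, reference_distance: float = 1.0) -> float:
    """Strength at 'distance' relative to the strength at 'reference_distance',
    for anything spreading out evenly from a point (light, a radio signal, gravity)."""
    return (reference_distance / distance) ** 2

for d in (1.0, 2.0, 3.0, 4.0):
    print(f"at {d:.0f} times the distance, the strength is {relative_intensity(d):.4f} of the original")
# Twice as far away gives 1/4, three times as far gives 1/9, and so on.
```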

So, why can't we make use of the Inverse Square Law when dealing with anything that forms a triangle? It does not necessarily have to involve an actual spatial triangle; this law of physics can be applied to anything that forms a triangle in its pattern of events. This opens up a whole world of possibilities.

Actually, anything which changes at a steady rate forms a triangle in pattern. Picture a right triangle, or a cone. If some quantity begins at zero, and proceeds at a steady rate to some maximum, it can easily be expressed as a triangle. Let's reuse the triangle formed by the observer looking at the building, with the starting value of zero in the place of the observer at the point of the triangle, and the maximum in the place of the building at its base.

Now, let's have a look at fractions. I find that fractions represent the way reality really operates. We count in tens, and so we prefer decimal expression. But that is an artificial numbering system, and the use of decimals tends to make patterns in numbers less apparent than if we used fractions.

Using a simple example of the Inverse Square Law, we can see that triangles have a very useful relationship with the squares of fractions.

Suppose that we have a right angle between two lines. The vertical line has a length of four units, and the horizontal line a length of six units. Let's draw a line from the end of the horizontal line to the top of the vertical line to form a right triangle. The triangle would have an area of twelve square units, since the triangle is half of the rectangle formed by the two lines, and such a rectangle would have an area of 4 x 6 = 24 square units.

Next, let's consider the half of the horizontal line starting from its far end and moving toward the vertical line. This is the narrowest half of the triangle along the horizontal line. The vertical dimension of the triangle is zero at that far point, and two units at the halfway point of the horizontal line. This is because the vertical dimension reaches its maximum of four units, and we have gone halfway there from the opposite point of the triangle.

This narrow half of the triangle is itself a right triangle, with a base of three units and a height of two units. Its area is therefore half of 2 x 3 = 6, which is three square units.

Do you see the Inverse Square Law at work? Starting at the narrow end of the triangle, we proceeded halfway toward the wide end of the triangle. In doing so, we passed one quarter of the area of the triangle, because the total area of the triangle is twelve square units and the narrow half of the triangle, along the horizontal axis line, has an area of three square units.

This means that we can do all manner of measurements of anything forming a triangle using the squares of fractions. If the narrow half of a triangle (or cone) contains one quarter of its area or volume, it must mean that the widest half of the triangle contains 3/4 of its area or volume. Likewise, the narrowest 1/3 of a triangle contains 1/9 of its total area or volume, which means the widest 2/3 contains the remaining 8/9, and so on.
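Here is a short Python check of those fractions, using the same 4-by-6 right triangle from the example above. The narrow piece measured from the point of the triangle is just a smaller, similar triangle, so its area follows the square of the fraction.

```python
def triangle_area(base: float, height: float) -> float:
    return 0.5 * base * height

base, height = 6.0, 4.0                      # the triangle from the example
total = triangle_area(base, height)          # 12 square units

for fraction in (1/2, 1/3, 1/4):
    # The narrowest 'fraction' of the triangle, measured from its point, is a
    # similar triangle with both dimensions scaled down by that same fraction.
    narrow = triangle_area(base * fraction, height * fraction)
    print(f"narrowest {fraction:.2f} of the length holds {narrow / total:.4f} of the area")
# The narrow half holds 1/4 of the area, the narrow third holds 1/9, and so on:
# the square of the fraction, just as with the Inverse Square Law.
```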

Now, let's move on. No one says that this very useful Inverse Square Law has to be limited to actual spatial applications. It must also apply to anything that forms a triangle in pattern, even if it does not involve an actual triangle in space. When you think about it, anything that proceeds at a steady rate between zero, or a minimum, and a maximum forms a triangle pattern when displayed on a graph.

An object in motion with a steady acceleration or deceleration forms a triangle, with the minimum at the point of the triangle and the maximum at its widest part. Of course, if the minimum is other than zero, all we have to do is add a rectangle beneath the triangle, with the height of the rectangle representing the value of the minimum. The most common use of calculus is to measure change, and change proceeds between a minimum and a maximum.

A falling object forms a definite triangle. The acceleration of falling due to gravity is the well-known 32 feet per second squared (I won't convert this to metric because it is easier to express in feet). This means that if an object is dropped, it will enter the first second of fall with a velocity of zero feet per second and end the first second with a velocity of 32 feet per second, with the increase coming at a steady rate. This means that the average velocity of the object, in its first second of fall, will be 16 feet per second. So, it will fall 16 feet in the first second.

The object enters its second second of fall with a velocity of 32 feet per second, and ends that second with a velocity of 64 feet per second. This means that its average velocity throughout the second second of fall was 48 feet per second. So, it fell 48 feet in the second second of fall.

In two seconds, the object has fallen 64 feet. The 16 feet that it fell in the first of the two seconds is 1/4 of 64. Can you see the triangle that is formed in this pattern, and the applicability of the Inverse Square Law?
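Here is that falling-object pattern worked out in a few lines of Python, using the same 32 feet per second squared figure.

```python
GRAVITY_FT_PER_S2 = 32.0   # acceleration of a falling object, as in the text

total_fallen = 0.0
for second in range(1, 5):
    start_speed = GRAVITY_FT_PER_S2 * (second - 1)
    end_speed = GRAVITY_FT_PER_S2 * second
    average_speed = (start_speed + end_speed) / 2.0   # the increase is steady, so the average works
    total_fallen += average_speed * 1.0                # one second at the average speed
    print(f"second {second}: average speed {average_speed:5.0f} ft/s, total fallen {total_fallen:5.0f} ft")
# After 1 second: 16 ft. After 2 seconds: 64 ft. The first second's 16 ft is
# 1/4 of 64, the square of 1/2, which is the triangle pattern at work.
```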

(By the way, this 16 feet would be a very useful unit of vertical measurement because of how it relates to the velocity of falling objects. I named this unit a "grav", for gravity, and described its use in the posting on this blog, "The Way Things Work", and in the book "The Patterns Of New Ideas".)

A rising ballistic object forms a triangle in reverse to that of a falling object. Throw a ball into the air and it will form one triangle on the way up, by starting at a maximum vertical velocity and proceeding to zero as a result of the action of gravity, and then another triangle on the way down as its velocity starts from zero, at the maximum altitude, and proceeds to a maximum.

Something like a ball rolling across the ground, with a steady deceleration, also forms a neat triangle that can readily be measured with this method.

What about a dam holding back a body of water? The pressure of the water against the dam also forms a triangle. The water pressure starts at zero at the surface of the water, and proceeds steadily to a maximum at the bottom of the water.
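That pressure triangle gives the total force on the dam without any calculus, because the average pressure over the depth is just half the pressure at the bottom. Here is a short Python sketch; the dam dimensions are made-up figures for illustration.

```python
WATER_DENSITY = 1000.0   # kilograms per cubic meter
GRAVITY = 9.81           # meters per second squared

def force_on_dam(depth_m: float, width_m: float) -> float:
    """Total horizontal force of the water on a straight, vertical dam face.

    Pressure rises in a straight line from zero at the surface to a maximum
    at the bottom, so the average pressure is half the bottom pressure:
    the area of the pressure triangle, with no calculus required."""
    bottom_pressure = WATER_DENSITY * GRAVITY * depth_m
    average_pressure = bottom_pressure / 2.0
    return average_pressure * depth_m * width_m   # average pressure times the wetted area

# Made-up example: a dam face 30 meters deep and 100 meters wide.
print(f"Total force on the dam: about {force_on_dam(30.0, 100.0) / 1e6:.0f} million newtons")
```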

Anything spreading steadily along a circular front, such as an oil spill, forms the base of a cone in pattern, which we can easily measure using this method. When half of the time since the beginning of the spill had elapsed (assuming it started from an area of zero), the area covered by the spill was 1/4 of what it is now. We can also measure withdrawal at a steady rate in the same way.
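Here is a small Python check of the spreading-spill example. The spreading rate is arbitrary; the point is the quarter-of-the-area-at-half-the-time pattern.

```python
import math

SPREAD_RATE_M_PER_HOUR = 50.0   # arbitrary steady speed of the spill's circular front

def spill_area(hours: float) -> float:
    """Area covered by a spill that started from a point and spreads steadily outward."""
    radius = SPREAD_RATE_M_PER_HOUR * hours
    return math.pi * radius ** 2

now = 8.0   # hours since the spill began
print(f"area now:           {spill_area(now):12.0f} square meters")
print(f"area at half time:  {spill_area(now / 2):12.0f} square meters")
print(f"ratio: {spill_area(now / 2) / spill_area(now):.2f}")   # 0.25, one quarter
```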

Possibly the most useful application of the Inverse Square Law and the squares of fractions involves the total earnings of money at simple interest, that is, with the interest not rolled back in. Such a balance grows at a steady rate between minimum and maximum, and so also forms a triangle in pattern. (Compound interest, with the interest rolled back in, grows along a curve instead; curves are dealt with below.)
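Here is a Python sketch of that distinction. The principal and rate are arbitrary illustrative figures; the point is that simple interest follows the straight line of the triangle picture, while compounding bends it into a curve.

```python
# Simple interest grows in a straight line, so it fits the triangle picture.
# Compound interest (interest rolled back in) grows along a curve instead.
principal = 1000.0
rate = 0.05          # 5 percent per year
years = 10

simple_balance = principal
compound_balance = principal
for year in range(years):
    simple_balance += principal * rate       # interest on the original principal only
    compound_balance *= (1.0 + rate)         # interest on the whole growing balance

formula_balance = principal * (1.0 + rate * years)   # the straight-line formula

print(f"simple interest, year by year:    {simple_balance:8.2f}")
print(f"simple interest, by the formula:  {formula_balance:8.2f}")   # the same straight line
print(f"compound interest, year by year:  {compound_balance:8.2f}")  # a curve, always a bit more
```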

To find the sum total of any such calculation, how much distance has been covered in the case of velocity, or how much money has been earned in the case of interest, just form the triangle and find the area under it.

So far, we have seen how calculations can be done on anything involving change at a steady rate by using the Inverse Square Law of fundamental physics and basic fractions, with no need whatsoever to use calculus. But now, let's get a little bit more complicated.

It is easy enough to do measurements involving constant change, such as acceleration. But what if the rate of change is itself changing? For example, a graph of velocity will appear as a straight horizontal line for constant, unchanging velocity, and as a slanted straight line for constant change in velocity (acceleration or deceleration). But if the rate of acceleration were also changing, a graph of the velocity would show a curve. The area under the curve would represent the total distance travelled. The trouble is that, at first glance, we seem to need calculus to find the area under a curve.

But a simple curve can be broken down into straight lines. Constant acceleration, or constant change of some kind, is expressed as a straight slanted line on a graph, which can serve as the hypotenuse of a right triangle. But there may be a change in the rate of acceleration, or a change in the change in the rate of acceleration. There may even be a change in the change of the change in the rate of change.

To dispense with calculus, all we need to do is to arrive at triangles on our graph so that we can easily find the total distance travelled (or money earned, etc.) using ordinary geometry. No matter how complex the curve, we can find this by simply using multiple triangles and then adding their values to get a total. Of course, we would subtract the value that we get from the area under a triangle if it represented a negative value, such as deceleration, instead of positive acceleration.

Suppose that we wanted to find the total distance travelled by a moving object over a given period of time, but the velocity of the object was constantly changing.

We would start with one triangle representing the acceleration of the object at the beginning. It would be graphed as a rectangle if the velocity were constant, without acceleration.

If the object began to accelerate at a given point in time, we would start another triangle beginning at an axis representing the point in time at which the acceleration began.

If that acceleration rate was itself changing, rather than acting at a constant rate, we would set up another triangle representing that change, continuing between the appropriate points in time represented by the common vertical axes of the triangles.

If there was a change in that rate, we would set up yet another triangle to represent it. If there was deceleration, we could set that up with the common time axis at the top, instead of the bottom of the graph, and subtract that from our final total rather than adding it.
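As a sketch of that bookkeeping, here is a short Python example. It takes a velocity that changes at a constant rate within each stretch of time, adds up the distance as rectangles plus triangles, and checks the answer against the ordinary average-velocity formula. The particular speeds and durations are arbitrary.

```python
# Distance travelled from a velocity that changes at a constant rate within
# each stretch of time: a rectangle for the starting speed plus a triangle
# for the change. A negative triangle handles deceleration automatically.
segments = [
    # (duration in seconds, velocity at start in m/s, velocity at end in m/s)
    (10.0,  0.0, 20.0),   # speeding up from rest
    (5.0,  20.0, 20.0),   # constant speed: the triangle part is zero
    (8.0,  20.0,  4.0),   # slowing down: the triangle part comes out negative
]

total_distance = 0.0
for duration, v_start, v_end in segments:
    rectangle = v_start * duration                   # the steady part
    triangle = 0.5 * (v_end - v_start) * duration    # the changing part
    total_distance += rectangle + triangle

# Check against the usual average-velocity formula for each straight-line segment.
check = sum(duration * (v_start + v_end) / 2.0 for duration, v_start, v_end in segments)

print(f"rectangles plus triangles: {total_distance:.1f} m")
print(f"average-velocity check:    {check:.1f} m")
```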

Isn't this easier, and more enjoyable, than using calculus?