External ballistics is the study of how an object moves through a medium, such as a shell traveling through the air. The most important elements to consider when predicting where a flying projectile will land are those which exert forces upon it. The forces produce accelerations according to classical Newtonian rules, these accelerations alter the projectile's velocity over time, and the velocity in turn alters its position. That should be obvious to any student of physics.
I am no great shakes at physics, to be honest, but I will offer here my best current understanding of the physics as understood by the Royal Navy during World War I, along with some thoughts on how I personally tackled these physical systems when producing a simulation.
Forces acting upon a shell
The two most powerful forces on a shell are the force of gravity and the aerodynamic drag force caused by the shell having to push its way through the air. Gravity is pretty easy to characterize, as it is essentially a constant and always directed downward. Drag is trickier, as it depends upon the shape of the shell, the density of the air, and the shell's speed relative to the speed of sound at its present location. Secondary factors worth considering are wind and the ballistic drift caused by the shell's spiral flight imparted by the rifled gun. Lastly, the Coriolis effect and some relatively minor aerodynamic influences also act on the shell.
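To make the force-to-position chain concrete, here is a toy point-mass model of my own (not the Admiralty's method): constant downward gravity plus a drag deceleration proportional to the square of airspeed, integrated with simple Euler steps. Every number below (drag coefficient, frontal area, mass) is an illustrative guess, not data for any real shell.

```python
import math

G = 9.81      # gravity, m/s^2, treated as constant and directed downward
RHO = 1.225   # sea-level air density, kg/m^3 (held constant for simplicity)
CD = 0.30     # hypothetical drag coefficient for a pointed shell
AREA = 0.07   # hypothetical frontal area, m^2 (roughly a 12-inch shell)
MASS = 390.0  # hypothetical shell mass, kg

def fly(speed, elevation_deg, dt=0.01):
    """Integrate until the shell returns to launch height; return range in metres."""
    vx = speed * math.cos(math.radians(elevation_deg))
    vy = speed * math.sin(math.radians(elevation_deg))
    x = y = 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        drag = 0.5 * RHO * CD * AREA * v * v / MASS  # deceleration magnitude, m/s^2
        vx -= drag * (vx / v) * dt                   # drag always opposes the velocity
        vy += (-G - drag * (vy / v)) * dt            # gravity plus drag's vertical part
        x += vx * dt
        y += vy * dt
    return x
```

Even this crude sketch shows the essential point: the computed range falls well short of the vacuum trajectory, which is why the period's tables could not simply use schoolbook parabolas.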
Although the scientists of the day were fully capable of computing the effect of such forces at any given moment -- a feat repeatedly demonstrated in manuals in a morass of the most appalling mishmash of systems of measure where angles are measured in degrees, minutes and seconds, velocities variously in feet per second and knots, etc -- they lacked any means of simulating the accumulated effect of these instantaneous influences over the course of a shell's trajectory.
The approach they took was to learn all they could about an idealized shell type through experimentation and to compile these findings into the O. B. Ballistic Tables (where O.B. means "Ordnance Board"; previously, a different set called "Ingall's Tables" was employed). When proving a new weapon system, they fired enough test shots at various angles of elevation to see how the gun differed from this idealized weapon system, and then provided a few simple fudge factors ("coefficient of reduction") that allowed the idealized data to be distorted into a description of the new weapon despite very few test firings having been conducted. Their ability to do this with high precision is a testament to the strong background they had in mathematics, though even a neophyte such as I gets the idea that some seat-of-the-pants guesstimation was woven into the process.
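I do not know the Ordnance Board's actual mathematics, but the spirit of the idea -- distorting an idealized table with a single fudge factor fitted from a handful of proof shots -- might be sketched like this. All table values and firings below are invented for illustration.

```python
# Hypothetical idealized table: elevation (degrees) -> range (yards).
ideal = {2: 4000, 4: 7200, 6: 9800, 8: 11900}

# Hypothetical proof firings of the new gun at two of those elevations.
observed = {2: 3800, 6: 9300}

# Fit one scale factor (standing in for a "coefficient of reduction")
# by least squares over the proof shots.
num = sum(ideal[e] * observed[e] for e in observed)
den = sum(ideal[e] ** 2 for e in observed)
coeff = num / den

# The distorted table then stands in for the new weapon at every
# elevation, including ones that were never test-fired.
predicted = {e: coeff * r for e, r in ideal.items()}
```

The real procedure was surely more sophisticated (the coefficient acted on the ballistic solution, not crudely on the ranges), but the economy is the same: a few shots plus an idealized table yields a full range table.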
The 1918 Range Tables for His Majesty's Fleet (ADM 186/236) contain preface notes (p. 7) indicating the extent of the Admiralty's confidence in these approximations. They specify that the tables were based on O. B. Ballistic Tables compiled in June 1903 at Shoeburyness, and indicate that discrepancies attributable to the underlying use of Siacci's formulae can be expected to begin at elevations of 10 degrees and to become somewhat onerous past 15 degrees.
I have not found much information on ballistic drift as a physical phenomenon, and indeed it is hard to find specific data for a given weapon system on how far the shell would deviate to the right as it travelled to different ranges. However, the 1918 Range Tables offer a little explanation of how the Royal Navy treated drift, and some of its tables include fairly complete drift data, although one might wonder which figures were experimentally obtained and which resulted from their mathematical model for drift.
On pages 7 and 8 of this volume, they explain that the length of the projectile is a primary factor in different shells exhibiting different drift behaviors. But, somewhat ambiguously, page 8 offers a mathematical formula, chosen in 1916 to express drift, which boils down to
but immediately offers the caveat that not all tables have drift expressed in this manner, as determination of the constant for all guns was not possible before going to press. The above function puzzles me, as it seems so cavalier to let a rule of thumb take care of deflection altogether. But closer thinking suggests that the formula describes the deflection produced NOT necessarily by the gun's performance, but by a simple sight whose deflection feature is implemented through the simple expedient of inclining the sight by an angle that best matches the actual drift of the gun. Even so, I am surprised that the range tables did not include deflection data based on a few test firings at different ranges, with the remaining values filled in by interpolating their likely best values.
Digging deeper, if we look at the drift data in one range table, it does not appear to have been spat out by the formula above, unless the first few samples are distorted by rounding errors (the first 5 have only a single significant digit):
However, if one instead chooses a drift model in which the lateral deviation due to drift is assumed to be the result of a constant force acting upon the shell throughout its time of flight at the given range, one seems to get an even better fit.
My conclusion from seeing the above is that if I want a simulation that defers authority on the drift behavior of a weapon to this range table, I would take the drift datum at the most extreme range available, note the time of flight to reach that range, and dictate with a high degree of personal satisfaction that the physics code apply a lateral fudge force to the shell such that its accumulated lateral deviation matches the example firing. The deviation at every range the virtual gun could fire would then be highly realistic, unless of course someone finds more authoritative data on drift for this or a similar shell.
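The calibration I describe can be sketched in a few lines. Under a constant lateral acceleration a, the deviation after t seconds is d = a*t^2/2, so one reference point -- the drift and time of flight at the longest tabulated range -- pins down a. The 120 yards and 30 seconds below are invented numbers, not values from the 1918 tables.

```python
def lateral_acceleration(drift_at_max, time_of_flight):
    """Constant lateral acceleration such that 0.5*a*t^2 reproduces
    the drift observed at the reference (longest-range) firing."""
    return 2.0 * drift_at_max / time_of_flight ** 2

def drift_at(a, t):
    """Predicted lateral deviation after t seconds of flight."""
    return 0.5 * a * t * t

# Hypothetical reference point: 120 yards of drift after 30 s of flight.
a = lateral_acceleration(120.0, 30.0)
```

One consequence of the model is that drift grows with the square of time of flight, so at half the reference flight time the deviation is a quarter of the reference drift, which is the shape of curve that seemed to fit the tabulated data better than the 1916 formula.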
Coriolis effect
Often called the "Coriolis Force" despite the fact that it is merely a geometrical effect, this arises when shells are fired from one latitude toward a different latitude. The result is an apparent deflection error caused by the fact that the shooter and target are moving around the globe's axis of rotation at different radii.
Discussing it in detail is beyond the scope of this little essay, but I am inspired to mention it because someone asked me to explore an urban legend that the Royal Navy's shooting at the Battle of the Falklands was atrocious owing to their equipment applying corrections for Coriolis effect in the wrong direction, the action being in the southern hemisphere rather than the northern. To the best of my knowledge, no aspect of Royal Navy equipment or process took Coriolis effect into consideration at this juncture, and this is not a terrible deficiency. For, even if the old story were true, an action fought on a nearly constant bearing and at a range that changed only slowly would turn even a blatant mistreatment of Coriolis effect into a nearly constant error, and one unlikely to be large compared to other factors affecting the proper deflection to use (such as the zig-zagging of a fleeing enemy). The remedy for such a miscue would have been a spotting correction for deflection which, once made, would counteract the error for the remainder of the action.
While I think it likely that later systems of firing incorporated Coriolis corrections nicely, a system lacking such treatment, designed primarily to bring fire upon a maneuvering enemy, is not a sad system by any means. Taken in context, Coriolis errors are a nearly constant source of deflection error and quite modest in scale. The need to fire repeated salvoes, which for many reasons will require spotting to put them onto the target, means that a failure to handle Coriolis effect, or even a failure to handle it correctly, does not imply an inability to hit the target.
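A rough order-of-magnitude check supports the "quite modest in scale" claim. The horizontal Coriolis acceleration on an object moving at speed v at latitude phi is about 2*Omega*v*sin(phi), so the lateral deflection over a flight of time t is roughly Omega*v*sin(phi)*t^2. The speed, latitude, and time of flight below are my own assumed round numbers, not Falklands data.

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate, rad/s

def coriolis_deflection(avg_speed, latitude_deg, time_of_flight):
    """Approximate lateral Coriolis deflection in metres, assuming a flat
    trajectory flown at constant average speed (a crude estimate only)."""
    return OMEGA * avg_speed * math.sin(math.radians(latitude_deg)) * time_of_flight ** 2

# A shell averaging 500 m/s over 25 s of flight at 52 degrees south
# is deflected on the order of a couple of dozen metres.
```

Against a long-range shoot of many thousands of metres, with spotting corrections absorbing all constant deflection errors anyway, a miscue of this size is real but survivable, which is the point of the paragraph above.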