Uncovering the Hidden Information: A Novel Approach to Modeling Physical Phenomena Through Information Theory



Introduction
In modeling, the researcher plays an exceptional role in achieving high fidelity in the reproduction of physical objects. The researcher's subjective perception, however, may not correspond to, and may even be erroneous in assessing, the true physics of the phenomenon. Numerous validation and verification methods based on modern statistical approaches have been used to determine the plausibility of a constructed model (Stranden et al., 2007).
However, to date, no principle or method has been proposed that is recognized as a universal criterion for the credibility of a model. The laws discovered by brilliant scientists (the law of Archimedes, Newton's laws, Einstein's formula) seem to testify in favor of searching for models and patterns that are physically clear, have a small number of variables, and are simple. Nevertheless, it is important to remember that laws are simply models built on assumptions; they are sensitive to both qualitative and quantitative variables and remain true only within the accuracy obtained in experiments. As a result, it is challenging for future generations of scientists to regard simplicity as an absolute criterion for the immutability of scientific laws, which remain subject to ongoing investigation.
The accuracy of a model of a physical law depends significantly on its structure, which includes the number of variables and the functional relationships between them:
1. The number of variables: Adding more variables to a model of a physical law can increase its complexity and make it difficult to determine the behavior of the system being modeled accurately. More variables can increase uncertainty and noise, making it harder to isolate the impact of each variable on the system's behavior. Additionally, more variables may require more data to calibrate the model accurately, which can be difficult or expensive to obtain.
2. Functional relationships: Models that use simple, well-understood relationships between variables tend to be more accurate than those that use complex or poorly understood relationships. This is because simple relationships are easier to fit to the data and are less likely to introduce spurious correlations or other sources of error.
3. Assumptions and simplifications: Models of physical laws often involve simplifying assumptions that reduce their complexity and ease the analysis. Although these simplifications can be helpful, they can also introduce inaccuracies into the model. For example, a physical law model may assume that the system is perfectly symmetrical, or that some variables can be treated as constants. These assumptions may be reasonable in some cases, but they can also lead to errors if not carefully checked.
For the model of a physical law to be as accurate as possible, it is important to carefully consider all of these factors and, if possible, test the model using experimental data.
The model's accuracy is also affected by the quantity of information it contains. The amount of information contained in a model can affect its accuracy in several ways. In general, a more complex model that contains more information may be more accurate than a simpler model, but only up to a point. Beyond a certain point, increasing the complexity of the model may actually decrease its accuracy. Here are some ways in which the amount of information in a model can affect its accuracy: 1. Overfitting: If a model is too complex and contains too much information, it may fit the training data too closely and not generalize well to new data. This is known as overfitting, and it can lead to poor accuracy on new data.
2. Underfitting: If a model is too simplistic and lacks adequate information, it may fail to capture underlying patterns in the data, leading to poor accuracy.
3. Data quality: The quality of the data used to train a model also affects the information it carries. If the data are noisy or contain errors, a model with more information may be better able to distinguish signal from noise, resulting in better accuracy.
In summary, the quantity of information in a model may impact its accuracy in complex ways. A more complex model may be more accurate up to a point, but beyond that point, increasing the complexity can lead to overfitting and decreased accuracy. Finding the right balance between model complexity and accuracy requires careful consideration of the specific problem and data at hand.
Considering the above, the construction of an optimal model structure for physical laws may include representing the model as a channel that transmits information between the object of study and the researcher. At the same time, in the modeling process (a thinking act) it is assumed that the observed phenomenon is not subjected to external disturbances. Therefore, instead of the term "observer," who studies the object through measuring instruments, in what follows we use the term "thinker." Two fundamental aspects of measurement theory emerge from this statement and differ radically from traditional classical reasoning. Firstly, the thinker plays an active role in describing natural phenomena, building a model with a certain qualitative and quantitative set of variables from some system of units and based on his philosophical views. The system currently in wide use is the International System of Units (SI) (Davis, 2019); its structure and its smallest achievable uncertainty depend directly on the will of the researcher. Secondly, models created by the human mind and preceding experiments can be selected in such a way as to provide the highest possible throughput of the image of the object with the lowest noise level. Under the information approach, representing objective reality with high accuracy is not limited to philosophical reasoning but involves practical implementation. This is achieved by calculating the value of the information characteristic of the optimal model structure for the selected phenomenon.

Determination of Model Uncertainty by the Information Approach
For the purposes of this study, it is assumed that the properties of observed and simulated material phenomena are extensive and can be characterized using various variables chosen by the researcher. These variables reflect the essence of the phenomenon and its interaction with the surroundings. Furthermore, it is assumed that each variable represents a specific readout (as described by Kotelnikov, 1933, and Brillouin, 1956) that allows the researcher to obtain information about the observed object. Every equation in physics and technology is written in SI terms. Given that the SI is a complex consisting of a finite number of variables, we can calculate the number of criteria inherent in the SI: μSI = 38,265. In addition, when building a model, the thinker, by choosing certain variables and in many cases without being aware of it (seemingly unconsciously and unintentionally), forms a "group of processes" (GoP), which is inherent in the formulated model. According to (Sedov, 1993), a GoP is a set of natural phenomena or technological processes characterized by a qualitative and quantitative group of variables that reflect certain properties of the observed reality. As an example, consider heat-electromagnetic processes, for the description of which variables with the dimensions of length (L), mass (M), time (T), temperature (θ) and current strength (I) are used. In this case, the model belongs to the GoP ≡ LMTθI. Thus, a "comprehensive" model of the phenomenon may include μSI. It should be noted that all the following arguments can be applied both to various models, including dimensional and dimensionless variables, and to systems of units comprising various qualitative and quantitative sets of variables (Menin, 2023).
Here, by a variable we mean a "finite information quantity" (FIQ) (Del Santo and Gisin, 2019): a scalar variable, a constant, a position or momentum variable, or a dimensionless criterion whose values lie in the set of real numbers R.
The thinker, observing a natural phenomenon, researching a technological process, or developing equipment, seeks certain characteristics of the observed object in accordance with his philosophical vision. With such a subjective approach, the connecting threads between the observed object and the environment are broken. In addition, the thinker takes into account a smaller number of variables than objective reality requires, due to lack of time, technical resources, and available funds. As a result, the model of the researched phenomenon may have some uncertainty or fuzziness, which is primarily determined by the number of FIQs included in the model. Additionally, different research groups may take different approaches to studying the same object and may therefore focus on different sets of variables, which can differ both qualitatively and quantitatively. This can further contribute to the uncertainty and complexity of the model, making it challenging to draw definitive conclusions from the results. Hence, when dealing with physical or technological processes, the inclusion of one or more variables in the model may be regarded as stochastic or random. To defend this seemingly counterintuitive statement, it is worth noting the ongoing debate surrounding the nature of the electron: is it a particle or a wave? This discussion highlights the fact that certain phenomena in the physical world may not conform to our intuition or expectations and may require a more nuanced and complex understanding. The intense debates among brilliant scientists, who held differing philosophical perspectives, ultimately led to the recognition that both approaches to understanding the nature of the electron were valid. This cannot be ignored or discarded.
The quantity of information that can be obtained about a researched phenomenon is always limited due to the finite quantity of information in both the SI and the model used to represent it. This inherent limitation leads to a fundamental level of uncertainty, which can be quantified by applying the concept of entropy to the modeling process. Specifically, the special inequality can be introduced: 0 < H(Y) < H(X), where H(Y) and H(X) represent the entropies of the model and the SI, respectively. To calculate these entropies, we can use the formalism described in (Landsberg, 1986;Lloyd, 2000).
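A minimal illustration of the inequality 0 < H(Y) < H(X): for this sketch only, we take the entropy of a set of equally likely FIQs to be the logarithm of their number (an assumption made here for illustration; the exact formalism is that of Landsberg, 1986, and Lloyd, 2000, and the model size below is hypothetical).

```python
import math

# Illustration only: take the entropy of N equally likely FIQs as log2(N).
mu_si = 38_265      # number of criteria inherent in the SI (from the text)
z_model = 5         # hypothetical number of FIQs in some model

H_X = math.log2(mu_si)    # entropy of the SI, bits
H_Y = math.log2(z_model)  # entropy of the model, bits

# A model built from fewer FIQs than the SI carries strictly less information.
assert 0 < H_Y < H_X
```

Whatever the exact entropy formalism, any model drawing on a strict subset of the SI's FIQs is bounded in this way, which is the source of the fundamental uncertainty discussed above.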
In this formalism, H(Y) is calculated in two stages. The first stage involves calculating the entropy of the chosen group of processes (GoP), denoted by HGoP(Y). This quantity is determined by two factors: z', the number of finite information quantities (FIQs) in the chosen GoP, and β', the number of base quantities in the chosen GoP. The second stage involves calculating the entropy of the model itself, denoted by Hmod(Y). This quantity is determined by two additional factors: z", the number of FIQs used in the model, and β", the number of base quantities used in the model.
Based on the theoretically substantiated relationship between absolute uncertainty and entropy in modeling (Brillouin, 1956), we can determine the threshold at which the mismatch between the model and the observed object becomes significant. As a criterion for assessing this minimum threshold of discrepancy between the object under study and the constructed model, the comparative uncertainty ε is proposed:

ε = ∆/S, (3)

where ∆ is the absolute total uncertainty of the target FIQ due to the GoP and the FIQs included in the model, and S is the interval of change of the target FIQ, chosen by the researcher.
ε is an important element of information theory (Brillouin, 1956), although it has received almost no attention in the modern scientific and technical literature. Equation (3) is not a purely mathematical abstraction; it has a physical meaning: ε is the initial conceptual uncertainty inherent in any physical-mathematical model, independent of the measurement process and due only to the amount of information contained in the model.
Equation (3) can be viewed as a fitting principle (an uncertainty relation) for the model development process. No model can provide results that contradict Equation (3). That is, any change in the level of detail of the description of the observed object (z″ − β″; z′ − β′) changes ε for a specific GoP and the achievable accuracy of FIQ measurement, simultaneously determining a pair of quantities observed by the conscious researcher: the absolute measurement uncertainty ∆ of the investigated FIQ and the interval S of its change.
Equating the derivative of ε in (3) with respect to z′ − β′ to zero yields the following condition for achieving the minimum comparative uncertainty for a particular GoP:

(z′ − β′)/μSI = (z″ − β″)/(z′ − β′). (4)

By using (4), one can find the values of the lowest achievable comparative uncertainties for different GoPSI; both the comparative uncertainties and the numbers of chosen variables differ for each GoPSI.
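The minimization can be checked numerically. The sketch below assumes a two-term structure for the comparative uncertainty, ε(x) = x/μSI + m/x with x = z′ − β′ and m = z″ − β″ (an assumption on our part, since the expanded form of Equation (3) is not reproduced in this extract), and the value of m is illustrative only.

```python
import math

# Assumed two-term form of the comparative uncertainty (see lead-in):
#     eps(x) = x / mu_si + m / x,  x = z' - beta',  m = z'' - beta''.
# Setting d(eps)/dx = 0 gives x_opt = sqrt(mu_si * m).
mu_si = 38_265   # number of criteria in the SI (from the text)
m = 3            # hypothetical z'' - beta'' for some model

def eps(x):
    return x / mu_si + m / x

x_opt = math.sqrt(mu_si * m)  # stationary point of eps

# Numerical check that the stationary point is indeed a minimum.
assert eps(x_opt) <= eps(x_opt * 0.9)
assert eps(x_opt) <= eps(x_opt * 1.1)
```

At the minimum the two terms are equal, which is exactly the balance expressed by the condition for the lowest comparative uncertainty.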
For further reasoning and calculations, we must consider the fact that the dimension of any FIQ can be expressed only as a unique combination of the dimensions of the main base quantities (Menin, 2019):

dim q = L^l · M^m · T^t · … · F^f, (5)

where the product runs over the seven base quantities L, M, T, I, θ, J, F; J is luminous intensity, F is the amount of substance, and l, m, …, f are the exponents of the base quantities, each ranging between a maximum and a minimum value (Sonin, 2001; Menin, 2018).
Below are two examples of calculating the optimal value of εopt for a specific GoP: 1. For mechanics processes (GoPSI ≡ LMT), taking into account the aforementioned explanations and (4), the lowest comparative uncertainty εLMT can be reached under the following condition:

z′ − β′ = (e_l · e_m · e_t − 1)/2 − 3, (6)

where e_l, e_m, e_t are the numbers of dimension options for the quantities L, M, and T, respectively (Sonin, 2001); "−1" corresponds to the case when all the base-quantity exponents in formula (5) are zero; dividing by 2 reflects the fact that there are direct and inverse quantities (for example, L^1 is the length and L^−1 is the run length); and 3 corresponds to the three base quantities L, M, and T.
Equation (8), which follows from (6) and (7), leads to a non-trivial conclusion: within the framework of the FIQ-based approach, the optimal comparative uncertainty εopt cannot be realized by any mechanistic model (GoPSI ≡ LMT). Even one dimensionless base quantity does not allow approaching εopt. Moreover, the greater the number of variables in the model, the more the achieved model uncertainty εmod differs from εopt.
2. For combined heat and electromagnetism processes (GoPSI ≡ LMTθI), taking into account (4), the lowest comparative uncertainty εLMTθI can be reached under the following condition:

z′ − β′ = (e_l · e_m · e_t · e_θ · e_i − 1)/2 − 5, (9)

where "−1" corresponds to the case when all the base-quantity exponents in formula (5) are zero; dividing by 2 reflects the fact that there are direct and inverse quantities (e.g., L^1 is the length and L^−1 is the run length); and 5 corresponds to the five base quantities L, M, T, θ, and I.
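The counting rule behind these conditions can be verified by direct enumeration. In the sketch below the exponent ranges are hypothetical (the actual SI ranges from Sonin, 2001 are not reproduced in this extract); the code counts exponent combinations, drops the all-zero case, merges each dimension with its inverse, and subtracts the base quantities themselves.

```python
from itertools import product

def fiq_count(exponent_ranges):
    """Count candidate FIQs for a GoP with symmetric exponent ranges:
    drop the all-zero tuple, keep one of each {c, -c} pair of direct and
    inverse dimensions, then subtract the base quantities themselves."""
    beta = len(exponent_ranges)
    combos = [c for c in product(*exponent_ranges) if any(e != 0 for e in c)]
    reps = {c for c in combos if c > tuple(-e for e in c)}
    return len(reps) - beta

# Hypothetical symmetric ranges for a three-base-quantity GoP (L, M, T),
# five dimension options per base quantity (NOT the actual SI ranges):
e_l = e_m = e_t = 5
n = fiq_count([range(-2, 3)] * 3)
assert n == (e_l * e_m * e_t - 1) // 2 - 3  # matches the counting rule above
```

The enumeration reproduces the closed form exactly: the product of the per-quantity options, minus one for the dimensionless all-zero case, halved to merge direct and inverse quantities, minus the number of base quantities.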
The above reasoning and calculations can be used to assess the perfection of the models of physical laws. Within the framework of the presented informational approach, we use the ratio of the comparative uncertainty εmod achieved in the model to the theoretically substantiated value εopt. The closeness of these two uncertainties indicates that the proposed model considers many significant effects in describing the process under study. Conversely, a significant difference between these uncertainties indicates an urgent need to improve the constructed model.

Physical Laws From the Position of the Informational Approach
We will consider several physical laws (Einstein's formula, the Stefan-Boltzmann law, Hubble's law, Heisenberg's uncertainty principle, and Newton's law) from the perspective of assessing the quality of the model structure inherent in each of them, using the comparative uncertainty of the model εmod as a criterion. While these laws have many useful applications in physics and engineering, have been verified by numerous experiments, and have revolutionized our understanding of the physical world, it is important to understand their possible limitations and the contexts in which they may not accurately describe the physical world. Table 1 presents these laws with the names of the variables used in each model. Table 2 presents the possible limitations of the laws.
Einstein's formula 1. Limited to the speed of light: the formula applies only to objects moving at or near the speed of light. For slower-moving objects, the formula does not accurately describe the relationship between energy and mass (Hossenfelder, 2004).
2. Only applicable to objects with mass: the formula is only applicable to objects that have mass. For massless particles like photons, the formula does not apply (Friedman, 2017).
3. Energy and mass are not interchangeable: while the formula suggests that mass can be converted into energy, it is not a direct and instantaneous process. The conversion of mass into energy requires a specific process, such as nuclear fusion or fission (Flores, 2005).
4. Does not account for gravitational potential energy: the formula does not take into account gravitational potential energy, which can significantly affect the mass-energy equivalence of an object in a gravitational field (Giulini, 2014).
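A short numerical sketch (with illustrative masses and speeds) makes the low-speed regime concrete: the total relativistic energy γmc² is numerically indistinguishable from the rest energy mc² at everyday speeds, while near the speed of light the kinetic contribution dominates.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s (exact SI value)

def rest_energy(m_kg):
    """Rest energy E = m c^2."""
    return m_kg * C**2

def total_energy(m_kg, v):
    """Total relativistic energy gamma * m * c^2 at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * rest_energy(m_kg)

m = 1.0  # hypothetical rest mass, kg
slow = total_energy(m, 300.0) / rest_energy(m)    # airliner speed: ratio ~ 1
fast = total_energy(m, 0.9 * C) / rest_energy(m)  # 0.9c: gamma ~ 2.29
assert slow < 1.0 + 1e-9   # kinetic correction utterly negligible
assert fast > 2.0          # kinetic contribution exceeds the rest energy
```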
Heisenberg's inequality 1. Limited to quantum scale: the uncertainty principle applies only to subatomic particles and phenomena, such as electrons, photons, and atoms. It does not apply to macroscopic objects, where the uncertainty is negligible compared to the scale of the object (Castro, 2017).
2. Indeterminacy is not due to measurement: the uncertainty principle does not imply that the indeterminacy or randomness in quantum measurements is due to limitations in the measurement apparatus or techniques. Rather, it reflects the fundamental nature of quantum systems and the wave-particle duality of matter (Gregg, 2020).
3. Limited to certain pairs of observables: the uncertainty principle applies only to certain pairs of observables, such as position and momentum, or energy and time. For other pairs of observables, the uncertainty relationship may not hold or may be different (Tessarotto, 2020).
4. Cannot be violated: the uncertainty principle is a fundamental principle of nature and cannot be violated or circumvented. It places fundamental limits on the precision and accuracy of measurements and the predictability of quantum systems (Sen, 2014).
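The saturation of the bound can be checked numerically for a Gaussian wave packet, the minimum-uncertainty state (natural units and an illustrative width; the spreads are computed by direct numerical integration).

```python
import numpy as np

HBAR = 1.0  # work in natural units for the numerics

# Numerical check of Heisenberg's inequality for a Gaussian wave packet:
# psi(x) ~ exp(-x^2 / (4 s^2)) has sigma_x = s and sigma_p = hbar / (2 s),
# saturating sigma_x * sigma_p >= hbar / 2.
s = 0.7
x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * s**2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize

prob = np.abs(psi) ** 2
sigma_x = np.sqrt(np.sum(x**2 * prob) * dx)  # <x> = 0 by symmetry

# <p^2> = hbar^2 * integral |dpsi/dx|^2 dx for a real wavefunction
dpsi = np.gradient(psi, dx)
sigma_p = HBAR * np.sqrt(np.sum(np.abs(dpsi) ** 2) * dx)  # <p> = 0 by symmetry

assert sigma_x * sigma_p >= HBAR / 2 - 1e-4
assert abs(sigma_x * sigma_p - HBAR / 2) < 1e-3  # Gaussian saturates the bound
```

Any non-Gaussian state gives a strictly larger product, consistent with point 4 above: the bound itself cannot be undercut.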

Stefan-Boltzmann law
1. Applicable only to idealized objects: the Stefan-Boltzmann law assumes that the object radiates energy uniformly in all directions, which is not always the case in real-world scenarios. For example, an object with an irregular surface or non-uniform temperature distribution may not radiate energy uniformly in all directions (Blackbody).
2. Only valid for opaque objects: the law applies only to opaque objects that absorb and emit radiation, which limits its application in contexts where transparency or translucency is important, such as in optical fibers (Black-body).
3. Does not account for reflected radiation: the law does not take into account the reflection of radiation, which can significantly affect the net radiation emitted or absorbed by an object in the presence of other nearby objects or surfaces (Bimonte, 2016).
4. Limited to steady-state conditions: the law assumes that the object is in a steady-state condition, meaning that its temperature is constant over time. In dynamic or transient conditions, such as during the heating or cooling of an object, the law may not accurately describe the energy transfer and radiation emission from the object (Bimonte, 2016).

5. Assumes blackbody radiation: the law is based on the assumption of blackbody radiation, which may not accurately describe the radiation emitted by real-world objects. In some cases, the emissivity or spectral characteristics of an object may differ significantly from those of a blackbody, leading to inaccuracies in the application of the law (Wellons, 2007).
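The emissivity and ambient-temperature caveats above can be folded into the standard grey-body form of the law, P = e·σ·A·(T⁴ − T_amb⁴), where e is the surface emissivity; the following is a minimal sketch with illustrative values.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(area_m2, temp_k, emissivity=1.0, ambient_k=0.0):
    """Net radiated power P = e * sigma * A * (T^4 - T_amb^4).
    emissivity=1.0 is the blackbody idealization discussed above;
    ambient_k=0.0 is the 'radiating into a vacuum at absolute zero' case."""
    return emissivity * SIGMA * area_m2 * (temp_k**4 - ambient_k**4)

# A 1 m^2 blackbody at 300 K radiating into empty space:
p_ideal = radiated_power(1.0, 300.0)             # ~459.3 W
# The same surface as a grey body (emissivity 0.9) in a 290 K environment:
p_real = radiated_power(1.0, 300.0, 0.9, 290.0)
assert p_real < p_ideal  # both corrections reduce the net emitted power
```

Because the temperatures enter as fourth powers, even a modest ambient temperature removes most of the net flux, which is why the idealized form can badly overestimate real heat loss.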
Hubble's law 1. Limited to the linear regime: Hubble's law assumes a linear relationship between the distance and recessional velocity of galaxies, which holds only for relatively small distances or low recessional velocities. At larger distances or higher velocities, other physical effects, such as the expansion of the universe or the gravitational attraction of nearby objects, may become important and affect the observed relationship (MacCallum, 2015).
2. Affected by peculiar velocities: the observed recessional velocity of a galaxy can be affected by its peculiar velocity, which is the velocity it has relative to the cosmic microwave background radiation. Peculiar velocities can arise from various physical effects, such as the gravitational influence of nearby objects, and can distort the observed relationship between distance and recessional velocity (Nicolaou, 2020).
3. Limited to a homogeneous and isotropic universe: Hubble's law assumes a homogeneous and isotropic universe, meaning that the properties of the universe are the same in all directions and locations. This assumption may not hold in the presence of significant spatial variations or anisotropies, such as those caused by large-scale structures or cosmic voids (Edwin, 2015).
4. Assumes a constant expansion rate: Hubble's law assumes a constant rate of expansion of the universe over time, which may not hold in the presence of dark energy or other unknown physical phenomena that could affect the expansion rate (Freedman, 2004).
Newton's law 1. Only applicable to classical mechanics: Newton's law is based on classical mechanics and is applicable only to objects that are much larger than atoms and moving at speeds that are much slower than the speed of light. It does not accurately describe the behavior of particles in the quantum world or at relativistic speeds (Rynasiewicz, 2011).
2. Limited to inertial frames: Newton's law is applicable only to inertial frames of reference, which are frames that are not accelerating or rotating. In non-inertial frames, such as frames that are accelerating or rotating, the law may not accurately describe the motion of objects (Peraire, 2004).
3. Neglects relativistic effects: Newton's law does not take into account relativistic effects, such as time dilation and length contraction that become significant at very high speeds or in strong gravitational fields (Rivadulla, 2004).
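The domain-of-validity claims in points 1 and 3 can be quantified by comparing the Newtonian momentum mv with the relativistic γmv; the relative error below depends only on speed, and the speeds chosen are illustrative.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def momentum_error(v):
    """Relative error of the Newtonian momentum p = m v against the
    relativistic p = gamma * m * v (the mass cancels)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) / gamma

# Newton's law is an excellent approximation at everyday speeds:
assert momentum_error(30.0) < 1e-10     # a car on a highway
assert momentum_error(7800.0) < 1e-6    # a satellite in low Earth orbit
# ...but breaks down badly at relativistic speeds:
assert momentum_error(0.5 * C) > 0.1    # half light speed: >13% error
```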
Using reasoning similar to (6)-(11), within the framework of the FIQ-based method, the characteristics of various laws were obtained (Table 3). Conventional experience and current practice in the use of relative uncertainty do not provide any clues to the accuracy of the model's structure itself. What can be said about the accuracy of models of these laws, using the ratio εmodi/εopt as a criterion? By analyzing the data in Table 3 in terms of εmodi/εopti, we can draw the following conclusions.
The models used to represent Einstein's equation, Heisenberg's inequality, and Newton's law, based on GoPSI ≡ LMT, lead to very low values: εmod1/εopt1 = εmod2/εopt2 = εmod5/εopt5 ≈ 0.18. This indicates that, within the framework of the FIQ-based method, many possible additional relationships with unaccounted variables are ignored. Undoubtedly, these laws embody the simplicity and depth of scientific thought, are consistent with the multitude of experimental data that have been obtained, and make it possible to make predictions leading to the discovery of new theories. At the same time, one can make the controversial statement that these laws may rest on the subjective view of a scientist at a deep level, which leads to the idea of revising the fundamental nature of reality. While they are very effective at prediction, there are certain scenarios where additional physical variables may need to be considered to improve their accuracy. In addition, there are still many open questions in physics that remain unanswered, and researchers continue to look for new ways to improve our understanding of the world around us. Some examples of additional factors that could be added to improve them include (Table 4): 2. Another area of active research is the study of high-energy physics, where scientists study the behavior of particles at extremely high energies. In this regime, relativistic effects become more pronounced, and new phenomena can arise, such as the creation of particle pairs and vacuum polarization. By studying these effects in detail, researchers hope to better understand the fundamental laws of nature and potentially discover new physical variables that could be included in the theory of relativity (Autschbach, 2012).
3. Another way to improve the theory of relativity is to keep improving our measurements of physical quantities. For example, the mass-energy equivalence relation E = mc^2 is based on the assumption that mass is a constant, but recent experiments have shown that mass can actually change slightly depending on the energy state of the particle. By improving our ability to measure mass and energy with greater accuracy, we can refine Einstein's formula and better understand the underlying physics of the universe (Borsanyi, 2021).
Heisenberg's inequality 1. Quantum entanglement: Heisenberg's inequality (HI) assumes that two measurable observables are independent of each other. In fact, quantum entanglement can lead to a strong correlation between two observables, which can affect the accuracy of the measurement (Zhang, 2020).
2. Environmental noise: The HI assumes that the system being measured is completely isolated from its environment. However, in practice ambient noise can affect the accuracy of the measurement (Haase, 2018).
3. Finite measurement time: HI assumes that the measurement is instantaneous. However, in reality, measurements take a finite amount of time, which can affect the accuracy of the measurement (McGuinness, 2021).
4. Detector inefficiency: HI assumes that the detectors used to measure a system are perfectly efficient. In reality, detectors have a finite efficiency, which can affect the accuracy of the measurement (Feito, 2009).

5. General relativistic effects: the HI is a principle of quantum mechanics and does not take into account the effects of general relativity, which can become significant at high energies and in strong gravitational fields (Plotnitsky, 2014).
6. Including these additional factors in the HI equation can improve its accuracy in certain scenarios. However, it's important to note that adding more variables can also make the equation more complex and harder to solve, so it's always a trade-off between accuracy and simplicity (Kechrimparis, 2017).
Newton's law 1. Air resistance: Newton's law (NL) assumes that there is no air resistance, but in reality air resistance can play a significant role in the movement of objects, especially those moving at high speed (Lingefjärd, 2022).
2. Friction: NL assumes that surfaces are perfectly smooth and frictionless, but in reality there is always some degree of friction between surfaces that can affect the movement of objects.
3. Elasticity: NL assumes that all collisions are perfectly elastic, meaning that no energy is lost in the collision. In fact, many collisions are inelastic, meaning that some of the energy is lost as heat or sound (Mayhew, 2020).
4. Gravity: NL includes the effects of gravity, but it does not take into account the effects of relativistic gravity or the gravitational forces of objects with extreme mass or density (Schubert, 2011).
5. Quantum effects: NL is incompatible with quantum mechanics, which describes the behavior of particles at the atomic and subatomic level. In some cases, quantum effects can play a significant role in the motion of objects (Rabinowitz, 2007).
6. Electromagnetic fields: NL does not take into account the influence of electromagnetic fields, which can have a significant impact on the movement of charged particles (Pinheiro, 2011).
The value of εmod4/εopt4 is much higher than 1 (≈ 25.2), which confirms the assumption that scientists have not yet considered the hidden effects (potential physical relationships between variables) affecting the expansion of the universe. Although Hubble's law expresses the fundamental relationship between recession velocity and distance, the relationship between recession velocity and redshift depends on the accepted cosmological model. In addition, the use of a model with GoPSI ≡ LT when measuring H0 is not recommended, because the theoretically lowest comparative uncertainty cannot be realized either in theory or in practice. The experimental numerical value of the Hubble constant is determined by the model and the measurement process that implements it, based on the subjective assessment of scientists. Therefore, researchers' confidence that they have considered all possible sources of uncertainty does not guarantee the achievement of the true H0. The informational approach can serve as a tool for assessing the admissibility of any one subjective estimate of the magnitude of the uncertainties in the calculation of the Hubble constant.
Hubble's law is a fundamental concept in cosmology that describes the relationship between the distance and recession velocity of galaxies. While Hubble's law has provided a useful framework for understanding the expansion of the universe, there are several ways in which scientists could work to improve its predictive power.
Here are a few potential directions: 1. Improved distance measurements: Hubble's law relies on accurate measurements of the distances to galaxies. Historically, these measurements have been challenging to make, leading to uncertainties in the values of the Hubble constant (the proportionality constant in Hubble's law). By developing new techniques for measuring distances, such as improved parallax measurements or more precise standard candles, scientists could reduce these uncertainties and improve the predictive power of Hubble's law (Riess, 2022).
2. Better velocity measurements: Similarly, the accuracy of Hubble's law depends on the accuracy of velocity measurements for galaxies. By developing new methods for measuring galaxy velocities, such as improved spectroscopic techniques or more precise redshift measurements, scientists could reduce uncertainties in the Hubble constant and improve the predictive power of Hubble's law (Riess, 2022).
3. Accounting for other effects: Hubble's law assumes a simple linear relationship between distance and velocity, but there may be other factors that affect this relationship. For example, the presence of dark matter or the effects of large-scale structures in the universe could influence the observed velocities of galaxies. By developing models that account for these effects, scientists could refine the predictions of Hubble's law and better understand the underlying physics of the universe (Kamionkowski, 2023).
4. Incorporating additional data: Hubble's law is based on observations of relatively nearby galaxies, but there is a wealth of data available from more distant objects, such as supernovae or cosmic microwave background radiation. By incorporating these additional data sets into their analyses, scientists could refine the predictions of Hubble's law and better understand the evolution of the universe (Hu, 2023).
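Points 1-3 can be illustrated with a small synthetic experiment (all numbers are hypothetical): peculiar velocities act as scatter around the linear relation v = H0·d, limiting, but not biasing, a least-squares estimate of H0.

```python
import numpy as np

rng = np.random.default_rng(1)
H0_TRUE = 70.0                             # km/s/Mpc, a fiducial value for the sketch
d = rng.uniform(10.0, 200.0, 100)          # galaxy distances, Mpc (synthetic)
v_peculiar = rng.normal(0.0, 300.0, 100)   # peculiar velocities, km/s (synthetic)
v = H0_TRUE * d + v_peculiar               # observed recession velocities

# Least-squares slope of a line through the origin: H0_hat = sum(d*v) / sum(d*d)
h0_hat = float(np.sum(d * v) / np.sum(d * d))
assert abs(h0_hat - H0_TRUE) < 2.0  # scatter widens, but does not bias, the estimate
```

Improving distance and velocity measurements (points 1 and 2) shrinks the scatter term, while modeling the systematic effects of points 3 and 4 removes biases that a simple linear fit cannot see.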
The practical value of the Stefan-Boltzmann law is confirmed by the very small value of the total relative uncertainty (1.76·10^−11) obtained from the calculation of all the uncertainties of the variables included in the model, and by the significant closeness of the ratio εmod3/εopt3 to 1 (≈ 0.58). Nevertheless, several factors could be taken into account to refine the law:
1. Surface emissivity: The Stefan-Boltzmann law assumes that the surface of an object is a perfect black body that emits and absorbs radiation at all wavelengths equally. However, real-world surfaces are not perfect black bodies, and their emissivity (the ratio of the radiation emitted by a surface to the radiation emitted by an ideal black body at the same temperature) can vary considerably depending on the material and surface condition (Muzika, 2023).
2. Surface temperature distribution: The Stefan-Boltzmann law assumes that the surface of an object has a constant temperature. However, in many cases the surface temperature can be non-uniform, which can affect the amount of energy emitted (Henry, 2019).
3. Reflection and absorption: The Stefan-Boltzmann law assumes that all radiation emitted by an object is absorbed by the environment. In reality, some of the radiation may be reflected back to the surface or absorbed by nearby objects, which may affect the amount of energy emitted (Bimonte, 2016).
4. Geometry and shape: The Stefan-Boltzmann law assumes that an object is a flat surface with a uniform temperature. In reality, objects can have complex geometries and shapes, which can affect the amount of energy emitted (García-Esteban, 2021).

Ambient Temperature and Radiation:
The Stefan-Boltzmann law assumes that an object radiates into a vacuum at absolute zero temperature. However, in many real-world scenarios an object radiates into an environment with non-zero temperature and its own radiation, which can affect the amount of energy emitted (Wellons, 2007).
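Two of the caveats above, non-ideal emissivity and a non-zero ambient temperature, are commonly folded into the standard grey-body net-exchange formula P = ε·σ·A·(T⁴ − T_env⁴). The sketch below contrasts the ideal black-body case with a grey surface in a warm room; the emissivity value is illustrative, not tied to any material discussed in this paper.

```python
# Grey-body correction to the Stefan-Boltzmann law: instead of the ideal
# j = sigma * T^4 into a 0 K vacuum, net radiated power is commonly
# estimated as P = eps * sigma * A * (T^4 - T_env^4), accounting for
# surface emissivity and ambient radiation.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_radiated_power(T, T_env, area=1.0, emissivity=1.0):
    """Net power (W) radiated by a grey surface into surroundings at T_env."""
    return emissivity * SIGMA * area * (T**4 - T_env**4)

# Ideal black body at 300 K radiating into a 0 K vacuum (~459 W per m^2):
ideal = net_radiated_power(300.0, 0.0)
# Grey surface (emissivity 0.6, illustrative) in a 293 K room:
real = net_radiated_power(300.0, 293.0, emissivity=0.6)
print(f"ideal: {ideal:.1f} W, grey body in a room: {real:.1f} W")
```

The large gap between the two numbers shows why ignoring emissivity and ambient temperature, as the idealized law does, can misestimate radiated power by an order of magnitude in everyday conditions.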
Thus, the main goal of the FIQ-based method is to identify the model structures (the GoP and the number of variables taken into account) that are preferable, in order to guide scientists toward models whose comparative uncertainty is closer to the theoretically justified value.

Discussion
The concept of "uncertainty" is widely used in scientific research. It applies both to the measuring device and to the model of the phenomenon under study, in accordance with which the measurement process is carried out. Uncertainty characterizes the quality and value of the information extracted from the model and the accuracy of the experiment. Since any physical or technological process may involve a large number of variables considered by the researcher, the exact value of the uncertainty becomes important.
The significance of obtaining limits on the accuracy of the model itself is that, by constructing a model with optimal (or close-to-optimal) comparative uncertainty, the developed theory opens up new opportunities for scientists to confirm its importance and maintain an independent role in physics. Equation (3), as a limiting relation, may satisfy the intuitive expectations of physicists.
A model is a mathematical structure with a defined GoP and a set of variables which, from the perspective of the thinker, must confirm the researcher's assumptions and conclusions as well as the assumptions underlying it. It can be argued that a model is an accurate description of a scientist's thinking. The information method makes it possible to reduce the role of phenomenology and conjecture when constructing a theory of a phenomenon or process, which is important for future model developers when the underlying cause-and-effect relationships are unknown or complex.
The fact that the information method fully describes the modeling process that precedes experiment and measurement is not a shortcoming of the method but a logical necessity for assessing the initial, unremovable uncertainty of the model.
The stated provisions of the FIQ approach differ sharply from the principles of quantum mechanics (QM) and from the ideas embodied in classical physics (CP). In QM, the act of observation introduces a perturbation into the observed system by means of a measuring instrument using an electromagnetic field (light and electromagnetic waves), which interferes with what is being observed; it is generally accepted that uncertainty is built into the nature of quantum systems. In CP, the actual measurement is independent of the constructed model. In the information method, the object of study is the amount of information contained in the model, which depends on the philosophical view of the thinker. In both QM and CP, the structure of the model of the phenomenon under study is not considered a source of uncertainty and lies outside the scope of their study.
The use of the information method in the analysis of the modeling process poses a new barrier for researchers in their attempts to achieve higher measurement accuracy (Menin, 2020). This limit is far coarser, much larger, than the limit dictated by Heisenberg's inequality. No statistical methods for processing experimental data, and no super-powerful computers or unique test benches, will help scientists overcome the limit formulated by Equation 3. This is because modelers, whether they want it or not, already use an intangible tool at the modeling stage: a model, a channel with limited bandwidth for transferring information from the object under study to the thinker. The model contains a finite amount of information. This statement applies to classical mechanics, quantum mechanics, and the theory of relativity alike, each of which uses certain models in its own field. Moreover, the "blurring" of the phenomenon under study is always present if the number of FICs differs from the optimal number.
The ε-equation is an analogue of the Heisenberg expression that is characteristic of the modeling process. Moreover, the limits of application of this equation extend not only to the case of two variables (Wendl et al., 2022) but also to phenomena involving a large number of model variables. Ultimately, the ε-equation highlights a situation in which the accurate prediction of any event is "clouded" by a priori (initial) and unavoidable uncertainty caused by the finiteness of information in the model.

Conclusions
The uncertainty caused by the finiteness of the information in the model is inevitable and is present in the study of any physical phenomenon and technological process. It follows that, in addition to the uncertainty principle in quantum mechanics, there is an additional uncertainty both in the macrocosm and in the microcosm, which is due to the non-material tools used (a system of units, a model with a finite number of variables) and also depends on the will of the thinker.
To calculate the comparative uncertainty, the mathematical apparatus of information theory and the concept of the complexity of the system of units were used.
The use of comparative uncertainty is expedient for deepening knowledge of the physical world and increasing the efficiency of technical projects. Yet for many decades no effort has been made to take this uncertainty into account in scientific and technical practice. In this paper, we presented an application of the information method to the analysis of the perfection (possible attainable accuracy) of known physical laws.
For a model of any given process, only a finite amount of information is available to describe its state, which is subject to unavoidable uncertainty. It finally yields the principle of finiteness (Sternlieb, 2013). This is not a mistake of either mathematics or physics: it is an inevitable property of model building.
In particular, we outlined some open problems for researchers interested in possible future improvements or corrections of already well-known discoveries.
The FIQ-based method for modeling physical phenomena can be applied to a wide range of scientific fields, including physics, chemistry, engineering, and materials science.
One way to use this method in future scientific research is to develop more accurate and detailed models of complex systems using comparative uncertainty. For example, researchers can use this method when analyzing large datasets to uncover new relationships and insights that may not be apparent using traditional analysis methods. In addition, the informational method can be used to optimize the design and operation of technological systems. For example, engineers can use simulation to test various designs and configurations of complex systems, such as aircraft engines or wind turbines, and determine the optimal parameters for these systems.
In general, the FIQ-based method for modeling physical phenomena and technological processes offers a powerful and flexible approach to scientific research, allowing researchers to better understand complex systems, identify new patterns and relationships, and optimize the operation of technological systems.