Mathematical Modelling of COVID-19 and Solving Riemann Hypothesis, Polignac’s and Twin Prime Conjectures Using Novel Fic-Fac Ratio With Manifestations of Chaos-Fractal Phenomena



Introduction
Preliminary Notice to Readers: Table of Contents is located at the end of this research paper after Appendix E.
In this paper containing pure and applied mathematics, a treatise on the Mathematics for Incompletely Predictable Problems required to solve Riemann hypothesis and explain the two closely related types of Gram points is outlined first; the treatise required to solve Polignac's and Twin prime conjectures is outlined subsequently. We mention three (arbitrary) spin-offs arising from solving these intractable open problems in Number theory. Appendix E outlines the important Chaos-Fractal perspective on Incompletely Predictable entities [and also provides a differentiation between Artificial Intelligence and Natural Intelligence].
COVID-19 is an acronym that stands for Coronavirus Disease 2019, with severity ranging from asymptomatic, mild, moderate and severe to life-threatening, with potential to result in chronic residual debilitating symptoms after recovery. It is a proven multi-organ disease generally affecting human lungs to the worst degree. To help ease the time constraints of front-line health workers interested in reading this paper, its content is mindfully composed to succinctly include selected materials relevant to the COVID-19 pandemic, which was officially declared by the World Health Organization (WHO) on March 11, 2020. Caused by the highly contagious and moderately virulent SARS-CoV-2 [originating from Wuhan, China in December 2019], this deadly pandemic has resulted in unprecedented negative global impacts from health and economic crises with numerous deaths and widespread job losses. Similar to most respiratory viruses, spread of infection can occur through contact (direct or indirect), > 5 µm size droplet spray in short-range transmission, and < 5 µm size aerosols in long-range (airborne) transmission. International cooperation to effectively combat the pandemic is required, with China and the US playing crucial roles aided by other big and small countries alike such as Russia, Canada, Great Britain, Germany, Australia, New Zealand, Vietnam and Thailand.
We devote the initial few pages of this paper to highlighting the importance of mathematics in understanding infectious disease outbreaks. The SIR model in Figure 1 and the SEIR model in Figure 2 are epidemiological (compartmental) models commonly used by mathematicians to compute the theoretical number of people afflicted with an infectious illness such as COVID-19 in a closed population over time. Respectively, they consist of three and four compartments derived from: S for Susceptible Population, E for Exposed Population, I for Infectious Population, and R for Recovered Population [including deceased &/or immune individuals]. Both models utilize (deterministic) ordinary differential equations. Aspects of modelling this pandemic in terms of our derived Fic-Fac Ratio are loosely and intuitively perceived as "Incompletely Predictable", a term also used when solving the [unconnected] above-mentioned open problems in Number theory.

Factitious versus Fictitious and the novel & versatile Fic-Fac Ratio:
The adjective factitious derives from factus (past participle of facere) and here means to correctly make or utilize something based on (true) fact, whereas fictitious derives from fictus (past participle of fingere) and here means to incorrectly make or utilize something based on (false) fiction. The "something" here refers predominantly to mathematical arguments (MA) and diagnostic tests (DT). These two adjectives with their given meanings are used to help create the Fic-Fac Ratio, an acronym that stands for Fictitious-Factitious Ratio. DT 'Accuracy' refers to the ability of that test to distinguish between patients with disease and those without. Roughly considered as 'Inverse Accuracy' [with higher Accuracy corresponding to lower Fic-Fac Ratio and vice versa], we advocate that this Ratio be universally applicable to all well-defined mathematical models.
With or without a "pseudo-component" (respectively) equating to '<100% accuracy' or '100% accuracy', we usefully categorize all synthesized mathematical models as broadly associated with either "proposed states" such as Riemann hypothesis or "natural states" such as the COVID-19 pandemic. During mathematical modelling of Riemann hypothesis, the less accurate inequation [as opposed to the more accurate equation] is the relevant pseudo-component as it contains Pseudo-(all fractional exponents) - see Subsection 1.4 below. During epidemiological modelling of the COVID-19 pandemic, the less accurate "Pseudo-SIR" model [as opposed to the more accurate SEIR model] is the relevant pseudo-component as it does not contain compartment E for Exposed Population. Modelling concepts from the open problems, COVID-19 and its resulting pandemic using the derived Fic-Fac Ratio [regarded as tertiary spin-offs from solving our mentioned open problems] are outlined next, whereby we provide concrete examples of an ideal gold standard MA and an ideal gold standard DT with their associated MA and DT results corresponding to Fic-Fac Ratio = 0.

Fic-Fac Ratio for Open Problems, COVID-19 and Its Resulting Pandemic
Abbreviations: MA = mathematical arguments, DT = diagnostic tests, P = Probability (or Proportion), R = Fic-Fac Ratio. We supply definitions, equations and schematic diagram of Fic-Fac Ratio (Figure 3) depicting important inter-relationships for Fic-Fac Ratio which are applicable to MA and DT. Required MA giving [abstract] positive and [abstract] negative MA results in a specified conjecture or hypothesis must be implemented to, respectively, fully confirm a "proposed state" to be correctly valid and correctly not invalid. Required DT giving positive and negative DT results in a specified subject group or population must be implemented that, respectively, aim to fully support a "natural state" to correctly occur and correctly not occur.
Based on the 2x2 contingency table in Table 1, both MA and DT have parameters forming "stable properties" and "frequency-dependent properties" as depicted below. Fic-Fac Ratio (range: 0 to ∞) is roughly 'Inverse Accuracy' since it varies in the opposite direction to Accuracy (range: 0 to 1).

Two stable properties:
Sensitivity (Sen) = a/(a+c); Specificity (Spec) = d/(b+d)

Four frequency-dependent properties:
Positive predictive value (+ve Pred value) = a/(a+b); Negative predictive value (-ve Pred value) = d/(c+d); Accuracy (Accu) = (a+d)/(a+b+c+d); Prevalence (Prev) = (a+c)/(a+b+c+d)

Using Bayes' theorem, +ve Predictive value can also be calculated as +ve Pred value = (Sen x Prev) / [(Sen x Prev) + (1 - Spec) x (1 - Prev)].

Note: For a well-defined "proposed state" or "natural state", P(Fic) and P(Fac) may each be constituted by ≥1 MA or ≥1 DT that are mutually independent and/or dependent. Using parameter R (Equation 2), Equation 1 is equivalent to the two parametric equations P(Fic) = R/(R+1) and P(Fac) = 1/(R+1) with R+1 ≠ 0 and R ≥ 0.

In the SEIR model, the extra compartment E for Exposed Population allows modelling to incorporate the incubation period. This is the time from exposure to the causative agent until first symptoms develop and is characteristic for each disease agent. WHO estimated in early 2020 that the incubation period for COVID-19 ranges from 1 to 14 days with a median of 5 to 6 days. One useful way to quantify the infectiousness of COVID-19 is the reproductive rate of its causative agent SARS-CoV-2, or R0. R0 measures the average number of secondary infections caused by a single case and was initially estimated by WHO to be 1.4 to 2.5 (average 1.95). Higher in countries that do not implement strong public measures such as extensive [and repeated] testing, contact tracing, case isolation and contact quarantine, R0 is a context-specific measurement which will fall to < 1 with successful control of outbreaks. Another [less useful] measure of infectiousness is the household secondary attack rate, or the proportion of household members who are likely to get infected from a case. Estimates of this rate have, not unexpectedly, varied significantly between studies in 2020 [not quoted here], ranging from as low as 3-10% to as high as 100% for COVID-19.
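The six table properties above, the Bayes form of the positive predictive value, and the parametric form of the Fic-Fac Ratio can be sketched in code (a minimal illustration; the function names are ours, not from the paper):

```python
def table_properties(a, b, c, d):
    """Six properties from the 2x2 contingency table.
    a = true positives, b = false positives,
    c = false negatives, d = true negatives."""
    n = a + b + c + d
    return {
        "Sen": a / (a + c),
        "Spec": d / (b + d),
        "+ve Pred value": a / (a + b),
        "-ve Pred value": d / (c + d),
        "Accu": (a + d) / n,
        "Prev": (a + c) / n,
    }

def ppv_bayes(sen, spec, prev):
    """Positive predictive value via Bayes' theorem."""
    return (sen * prev) / (sen * prev + (1 - spec) * (1 - prev))

def fic_fac_parametric(r):
    """Parametric equations P(Fic) = R/(R+1), P(Fac) = 1/(R+1), R >= 0."""
    return r / (r + 1), 1 / (r + 1)
```

Note that applying `ppv_bayes` to the Sensitivity, Specificity and Prevalence of any single 2x2 table reproduces a/(a+b) exactly, which is a useful consistency check on the Bayes form.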
This suggests that there may be factors that vary considerably between different groups, such as types of activities, duration of event, ventilation of the household and viral shedding of the case. All the above estimates can be subsequently refined as more data becomes available.
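The SEIR compartmental dynamics described above can be sketched with a simple Euler integration (a minimal illustration; the parameter values are assumptions chosen to echo the WHO figures quoted earlier, not fitted estimates):

```python
def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Integrate the standard SEIR ordinary differential equations
    with a simple Euler scheme.  beta = transmission rate,
    sigma = 1/incubation period, gamma = 1/infectious period."""
    s, e, i, r = s0, e0, i0, r0
    n = s + e + i + r
    for _ in range(int(days / dt)):
        ds = -beta * s * i / n
        de = beta * s * i / n - sigma * e
        di = sigma * e - gamma * i
        dr = gamma * i
        s += ds * dt; e += de * dt; i += di * dt; r += dr * dt
    return s, e, i, r

# Illustrative run: R0 = beta/gamma = 1.95 (WHO's early average estimate),
# assumed 5-day incubation and 5-day infectious periods,
# one seed case in a closed population of 10,000.
final = seir(beta=0.39, sigma=0.2, gamma=0.2,
             s0=9999.0, e0=0.0, i0=1.0, r0=0.0, days=365)
```

The total population is conserved at every step (the four derivatives sum to zero), which is a quick sanity check on any compartmental implementation.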
Applying Artificial Intelligence technology to contact tracing has been demonstrated to markedly improve the efficiency of this important process. We now give four concrete examples utilizing Fic-Fac Ratio (R) whereby the corresponding false positive and false negative MA and DT results do not exist; consequently, from Table 1, (a+d) = 1, (b+c) = 0, and R = 0. For optimal understanding, we discuss a [hypothetical] test subject group for MA and a patient group for DT, with the total number of each group and its two subgroups denoted (respectively) by NT = 100 and N1 = N2 = 50.
Obtaining MA results for a hypothesis or conjecture using ideal gold standard MA to rigorously prove: (III) Conjecture "Ubiquitous human angiotensin-converting enzyme 2 (ACE2) receptor is the sole entry receptor for SARS-CoV-2 causing COVID-19 when susceptible test subjects NT = 100 are [unethically] experimentally exposed to this virus with assumed 100% infectivity rate in an ideal world (but likely, say, up to around 59% infectivity rate (Ing, Cocks & Green, 2020) in the real world)" to be true via (i) COVID-19 infection will occur in test subjects N1 = 50 exposed to SARS-CoV-2 while not taking the novel drug 'irreversible ACE2 blocker' with 100% efficacy and acceptable "safety profile" [true positive MA result] and (ii) COVID-19 infection will not occur in test subjects N2 = 50 exposed to SARS-CoV-2 while taking this drug [true negative MA result].

The gene that encodes Transmembrane Serine Protease 2 (TMPRSS2) is activated when male hormones bind to the androgen receptor. It can be experimentally shown that the TMPRSS2 enzyme (Hoffmann et al, 2020) is required to cleave SARS-CoV-2's spike protein - a process known as proteolytic priming - before the virus can enter cells via its spike protein binding to the ACE2 receptor. Pharmacologically targeting (e.g.) ACE2 could theoretically be key to unlocking effective vaccines based on (e.g.) mRNA & DNA nucleic acid, weakened or inactivated viral forms, protein subunits and viral vectors; and effective drugs such as the antiviral medication Remdesivir [which inhibits viral replication thus shortening time to clinical recovery], 'androgen deprivation therapy', 'irreversible ACE2 blocker' and 'TMPRSS2 inhibitor'. Another hypothetical novel drug, 'floating version of ACE2', could trick the virus into preferentially binding with this drug rather than ACE2 on human cells, thus potentially treating COVID-19 infection and preventing viral replication and spread.
With the main effect of increasing the vasoconstricting angiotensin II hormone, ACE acts as a key regulatory peptide in the renin-angiotensin-aldosterone system (RAAS); and with the main effect of decreasing the vasoconstricting angiotensin II hormone, its counterpart ACE2 acts as a key counterregulatory peptide via its dual actions of firstly, acting as a ubiquitous functional receptor present in many parts of our body and secondly, simultaneously acting as an enzyme that predominantly degrades angiotensin II (and to a lesser extent cleaves angiotensin I and participates in hydrolysis of other peptides). In patients on RAAS blockade such as ACE inhibitor (ACEI) or angiotensin II receptor blocker (ARB) therapy for hypertension or diabetes, health workers are dealing with a double-edged sword depending on the phase of disease. Increased baseline ACE2 expression in these patients could potentially increase SARS-CoV-2 infectivity, and ACEI/ARB use would be an addressable risk factor. Conversely, once infected, downregulation of ACE2 may be the hallmark of COVID-19 progression. Consequently, upregulation by preferentially employing RAAS blockade and ACE2 replacement in the acute respiratory distress syndrome phase may turn out to be beneficial.
"Proposed states" such as modelling Riemann hypothesis when formulated as equation or inequation [with Pseudo-(all fractional exponents)] can and must be error-free. All "proposed states" can and must have their Fic-Fac Ratio = 0 with P(Fac) = 1 and P(Fic) = 0. This is equivalent to stating mathematical-based proofs for "proposed states" must always be mathematically rigorous and error-free.
Loosely speaking, "natural states" such as the Pseudo-SIR model or SEIR model for the COVID-19 pandemic are "Incompletely Predictable" in the sense that their statistical-based proofs should be statistically significant but can never be error-free. [Here, we omit outlining the common ordinary differential equations associated with the two models.] This is because both models as schematically displayed will (1) intrinsically be affected by obtained DT results using relevant DT, e.g. never having, in practice, 100% accuracy, and (2) extrinsically be affected by incorrect DT results obtained due to [unintentional] errors, e.g. sampling errors (likely causing false negative DT results in COVID-19 patients, potentially due to obtained saliva samples being insufficient, collected too early during infection or too late during recovery), observational errors, blunders, under- and over-reporting, or [intentional] errors, e.g. data fabrication and manipulation. We give an extreme "counter-example" of data fabrication and manipulation: Having an ulterior motive, local investigator Mr. CB decided to intentionally send an e-mail containing (say) important test results at (say) 3:45 PM Friday February 8, 2019 to a fabricated email address XYZ. Consequently, these results will never reach the intended recipient (statistician / epidemiologist) for analysis. Note: Medico-legally in terms of Fic-Fac Ratio, (i) XYZ is ['positively'] a fabricated email address for the recipient when used by Mr. CB since it never belonged to the recipient = (abstract) True Positive MA and (ii) XYZ used by Mr. CB is ['negatively'] a non-existing email address for the recipient since it was never created by the recipient = (abstract) True Negative MA. This unjustifiable action will lead to failure of these results to be properly incorporated into modelling an "old" epidemic occurring from (say) October 29, 2018 to February 8, 2019.
Both (1) and (2) will lead to some quantifiable increase of P(Fic) values [with reciprocal decrease of P(Fac) values] affecting, for instance, I for Infectious Population. Since we reject [or accept] probability-based Fic-Fac Ratio > 1 [or < 1], the overall goal is to always minimize P(Fic) &/or maximize P(Fac).
Gold standard MA must always be an (error-free) ideal gold standard MA. Gold standard DT refers to a DT used to achieve a definitive diagnosis obtained by biopsy, surgery, autopsy, long-term follow-up or another acknowledged standard. In theory, an ideal gold standard DT designed to detect SARS-CoV-2 is error-free, having Sensitivity = 100% (it identifies all individuals with the disease) and Specificity = 100% (it does not falsely identify individuals without the disease); consequently it will also have +ve Predictive value, -ve Predictive value, and Accuracy all = 100%. In practice, there is no ideal gold standard DT, and one tries to use a DT that is as close as possible to the ideal test. The commonly available reverse transcription-polymerase chain reaction (PCR) test on a nasal (oro/nasopharyngeal) swab detects the presence of genetic material of SARS-CoV-2 causing COVID-19. Results on Sensitivity and Specificity of this newly developed test depend critically on how closely it approaches the ideal test. It likely has intrinsic Sensitivity & Specificity in the range of (say) 90-95%. An assumed high Sensitivity & Specificity of 95% means that the test could still miss about 5% of infected people and falsely diagnose about 5% of non-infected people. If required, whole genome sequencing can additionally be performed on selected positive reverse transcription-PCR samples to detect phylogenetic clusters of SARS-CoV-2 and rapidly identify SARS-CoV-2 transmission chains. Notwithstanding the potential for some false-positive test results, perhaps due to people previously exposed to other less dangerous coronaviruses, IgG anti-coronavirus antibodies could be used to detect past COVID-19 infection and measure community immunity. Future development of potential tests using different methodology may be based on detecting viral components such as proteins, nucleic acids or combinations of these in patient samples.
In a study of all 217 passengers and crew on a cruise ship (Ing et al, 2020), 128 tested positive for COVID-19 on reverse transcription-PCR (59%). Of these infected patients, 19% (24) were symptomatic; 6.2% (8) required medical evacuation; 3.1% (4) were intubated and ventilated; and the mortality was 0.8% (1). The majority of infected patients were asymptomatic (81%, 104 patients). Thus prevalence of COVID-19 on affected [isolated] cruise ships [and tentatively projected by us to happen in some " hotspot" outbreak places on planet Earth] is likely to be significantly underestimated.
Remark 1.1. The difference that mitigation measures with full compliance by everyone could make to the severity of the COVID-19 pandemic is clearly illustrated by epidemiological modelling in Figure 4, courtesy of the Centers for Disease Control and Prevention (CDC) [with arising mental health problems being an addressable issue].
Dynamic staged implementation and subsequent staged easing of [beneficial] mitigation measures - such as lockdowns, border closures, social distancing (practising good hand and sneeze / cough hygiene; keeping more than 1.5-2 metres distance between people; using Personal Protective Equipment (PPE) correctly when deemed appropriate by authorized health officials for public and health-care settings, e.g. eye protection with visors, face shields or goggles, and three-layered homemade cloth face masks, surgical masks or P2 / N95 respirators (Chu et al, 2020)) and limiting indoor / outdoor mass gatherings - is based on experiences, expert opinions, statistical analysis of collected data, and previous and recent research studies, thus complying with Evidence-based Medicine (EBM) and Practice (EBP).

The ability of a test to discriminate between normal (without disease) and abnormal (with disease) individuals is described by its Specificity and Sensitivity. Generally, they are inversely related to each other and may be altered by changing the reference interval or normal range; in other words, one can only be improved at the expense of the other (for example, lowering the cutoff of a prostate-specific antigen test raises Sensitivity at the expense of Specificity). When a DT has Sensitivity of 95% (5% false -ve) and Specificity of 95% (5% false +ve), for a disease with 1% Prevalence its +ve Predictive value is only 16% but its -ve Predictive value is 99%. The relationship between Prevalence and +ve Predictive value with Sensitivity of 95% is numerically and graphically depicted in Table 2 and Figure 5.
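The quoted 16% and 99% predictive values can be checked directly from Bayes' theorem (a standalone sketch; the function name is ours):

```python
def predictive_values(sen, spec, prev):
    """Positive and negative predictive values from Bayes' theorem,
    given Sensitivity, Specificity and Prevalence as fractions."""
    ppv = sen * prev / (sen * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sen) * prev)
    return ppv, npv

# Sensitivity 95%, Specificity 95%, Prevalence 1%:
ppv, npv = predictive_values(sen=0.95, spec=0.95, prev=0.01)
# ppv is about 0.16 (16%); npv exceeds 0.99 (99%)
```

This makes the dependence on Prevalence explicit: re-running with prev=0.10 instead of 0.01 raises the positive predictive value sharply, which is the pattern depicted in Table 2 and Figure 5.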
Lymphocytes include natural killer cells, which function in cell-mediated, cytotoxic innate immunity; T cells for cell-mediated, cytotoxic adaptive immunity; and B cells for humoral, antibody-driven adaptive immunity (which is mostly mediated by differentiated B cells called plasma cells secreting Immunoglobulins G, A, M, D and E). Memory B cells are a B cell sub-type formed within germinal centers following primary infection. Memory B cells can survive for decades and repeatedly generate an accelerated and robust antibody-mediated immune response in the case of re-infection (also known as a secondary immune response). Memory T cells are a subset of T lymphocytes that might have some of the same functions as memory B cells, e.g. antigen-specific memory T cells against viruses or other microbial molecules can be found in both TCM and TEM subsets. The TVM subset also functions in production of various cytokines. Thus a COVID-19 vaccine candidate targeting a sufficient level of immunity against the SARS-CoV-2 spike protein must generate the appropriate type of antibody and T cell response.

We introduce the educational concept of 'Top-Down Approach' versus 'Bottom-Up Approach' to therapy for COVID-19 induced 'cytokine storm' causing hyper-inflammation. Assume the simplistic but not totally accurate caveat expressed through the following statement to be true: 'cytokine storm' is largely caused by an imbalance of two broad classes of identifiable cytokines, known as pro-inflammatory cytokines and anti-inflammatory cytokines, whereby there is supramaximal elevation of the former class with or without supramaximal fall of the latter class.
Then giving Dexamethasone, acting through a non-specific (increased) anti-inflammatory effect [likely via acting non-specifically on various cytokines], constitutes the 'Top-Down Approach' to therapy, whereas giving novel synthetic drugs 'pro-inflammatory cytokine X blocker' and/or 'anti-inflammatory cytokine Y', acting through, respectively, their specific (reduced) pro-inflammatory effect and (increased) anti-inflammatory effect, constitutes the 'Bottom-Up Approach' to therapy. Finally, we opine that only globally available safe and effective COVID-19 vaccine(s), when successfully developed, can ultimately control the COVID-19 pandemic. This is achieved by providing mass immunization targeting a sufficient herd immunity threshold [= 1 - 1/R0, estimated to be around 60-70%] at community level to prevent on-going transmission of this infection.
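The herd immunity threshold quoted above follows directly from the classical formula 1 - 1/R0; a small sketch (the R0 values 2.5 and 10/3 are illustrative choices that reproduce the 60-70% range):

```python
def herd_immunity_threshold(r0):
    """Classical threshold 1 - 1/R0: the fraction of a homogeneous
    population that must be immune for sustained transmission to stop."""
    return 1 - 1 / r0

# Illustrative R0 values bracketing the quoted 60-70% range:
lo = herd_immunity_threshold(2.5)       # 0.60, i.e. 60%
hi = herd_immunity_threshold(10 / 3)    # 0.70, i.e. 70%
```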

Open Problems From Riemann Zeta Function and Sieve of Eratosthenes
Dirichlet Sigma-Power Laws are the continuous format version of the discrete format Riemann zeta function (or its proxy Dirichlet eta function). Sieve of Eratosthenes is a simple ancient algorithm for finding all prime numbers up to any given limit by iteratively marking as composite (i.e., not prime) the multiples of each prime, starting with the first prime number 2. Multiples of a given prime are generated as a sequence of numbers starting from that prime, with constant difference between them equal to that prime. Dimension (2x - N) [see Section 8 "Information-Complexity conservation" for more details] dependently incorporates prime and composite numbers (and Number '1') whereas Sieve of Eratosthenes directly and indirectly gives rise to prime and composite numbers (but not Number '1'). In using the unique Dimension (2x - N) system with N = 2x - ΣPCx - Gap, Dimension (2x - N) when fully expanded is numerically just equal to ΣPCx - Gap since Dimension (2x - N) = 2x - 2x + ΣPCx - Gap = ΣPCx - Gap. In order to solve Riemann hypothesis, Polignac's and Twin prime conjectures (and explain the two types of Gram points), one could in principle use the Path A or Path B option in Table 3. Our chosen Path B requires Mathematics for Incompletely Predictable Problems.
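The Sieve of Eratosthenes as described above can be written directly (a standard implementation sketch):

```python
def sieve_of_eratosthenes(limit):
    """Return all primes <= limit by iteratively marking the multiples
    of each prime as composite, starting with the first prime 2."""
    is_prime = [False, False] + [True] * (limit - 1)  # indices 0..limit
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Multiples of p form a sequence with constant difference p.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]
```

As the paper notes, the sieve produces prime and composite numbers but never classifies the Number '1', which is marked non-prime from the outset here.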
Elements of three complete sets constituted by nontrivial zeros and the two types of Gram points, together with elements of two complete sets constituted by prime and composite numbers, are all classified as Incompletely Predictable entities. Riemann hypothesis (1859) proposed that all nontrivial zeros in Riemann zeta function are located on its critical line. Defining this as an Incompletely Predictable problem is essential in obtaining the continuous format version of the [discrete format] Riemann zeta function, dubbed Dirichlet Sigma-Power Law, to prove this hypothesis. All of infinite magnitude, nontrivial zeros when geometrically depicted as corresponding Origin intercepts, together with the two types of Gram points when geometrically depicted as corresponding x- & y-axes intercepts, explicitly confirm that they intrinsically form the relevant component of point-intersections in this function. Defining these as Incompletely Predictable problems is essential for these explanations to be correct. Involving proposals that prime gaps and associated sets of prime numbers are infinite in magnitude, Twin prime conjecture (1846) deals with even prime gap 2, thus forming a subset of Polignac's conjecture (1849) which deals with all even prime gaps 2, 4, 6, 8, 10,.... Defining these as Incompletely Predictable problems is essential to prove these conjectures using our unique Dimension (2x - N) system instead of Sieve of Eratosthenes. Thus our innovative Information-Complexity conservation, computed as Information-based complexity, constitutes a unique all-purpose [quantitative and qualitative] analytic tool associated with Mathematics for Incompletely Predictable problems. We say these problems can literally be perceived as "complex systems" containing well-defined Incompletely Predictable entities such as nontrivial zeros and two types of Gram points in Riemann zeta function (or its proxy Dirichlet eta function) together with prime and composite numbers from Sieve of Eratosthenes.
Remark 1.2. Mathematics for Incompletely Predictable Problems equates to the sine qua non of defining problems involving Incompletely Predictable entities as Incompletely Predictable problems, achieved by incorporating certain identifiable mathematical steps, with this procedure ultimately enabling us to rigorously prove or explain open problems in Number theory as primary spin-offs.
Obtained parallel observations: Just as there is conservation or preservation of (quantitative) "net area value" happening at appropriate times for the continuous format Riemann zeta function (aka Dirichlet Sigma-Power Law); similar conservation or preservation of (quantitative) "net number value" will happen at appropriate times for natural numbers on one hand and prime numbers, composite numbers and Number '1' [and even and odd numbers] on the other hand when Information-Complexity conservation is enforced in both scenarios. This concept is equally applicable to prime numbers, composite numbers and Number '1' [and even and odd numbers] when depicted using Dimension (2x - N). Then qualitatively, (maximal) Information-Complexity conservation for "complex system" Riemann zeta function equates to maximal three axes-intercepts occurring only when σ = 1/2 and minimal two axes-intercepts occurring when σ ≠ 1/2; and maximal Information-Complexity conservation for "complex system" Dimension (2x - N) on the constituents of natural numbers equates to N = 7 being baseline maximal viz. maximal [varying] Complexity occurring only for prime-composite number pairing, and N = 4 being baseline minimal viz. minimal Complexity occurring for even-odd number pairing.

Refined information on Incompletely Predictable entities of Gram and virtual Gram points:
These entities, all of infinite magnitude, are dependently calculated using the complex equation Riemann zeta function, ζ(s), or its proxy Dirichlet eta function, η(s), in the critical strip (denoted by 0 < σ < 1), thus forming the relevant component of point-intersections (see Figure 7).

Refined information on Incompletely Predictable entities of prime and composite numbers: These entities, all of infinite magnitude, are dependently computed (respectively) directly and indirectly using the complex algorithm Sieve of Eratosthenes. Denote C to be uncountable complex numbers, R to be uncountable real numbers, Q to be countable rational numbers or roots [of non-zero polynomials], R-Q to be uncountable irrational numbers, A to be countable algebraic numbers, R-A to be uncountable transcendental numbers, Z to be countable integers, W to be countable whole numbers, N to be countable natural numbers, E to be countable even numbers, O to be countable odd numbers, P to be countable prime numbers, and C to be countable composite numbers. A comprises those C (including R) that are countable rational or irrational roots.
Cardinality of a given set: With increasing size, an arbitrary Set X can be a countable finite set (CFS), countable infinite set (CIS) or uncountable infinite set (UIS). Cardinality of Set X, |X|, measures the "number of elements" in Set X. E.g. Set negative Gram[y=0] point is a CFS with |negative Gram[y=0] point| = 1, Set even P is a CFS with |even P| = 1, Set N is a CIS with |N| = ℵ0, and Set R is a UIS with |R| = c (cardinality of the continuum).
Formal definitions for Completely Predictable (CP) entities and Incompletely Predictable (IP) entities: In this paper, the word "number" [singular noun] or "numbers" [plural noun] in reference to prime & composite numbers, nontrivial zeros & two types of Gram points can interchangeably be replaced with the word "entity" [singular noun] or "entities" [plural noun]. Respectively, an IP (CP) number is locationally defined as a number whose position is dependently (independently) determined by complex (simple) calculations using a complex (simple) equation or algorithm with (without) needing to know related positions of all preceding numbers in the neighborhood. Simple properties are inferred from a sentence such as "This simple equation or algorithm by itself will intrinsically incorporate actual location [and actual positions] of all CP numbers". Solving CP problems with simple properties amenable to simple treatments using usual mathematical tools such as Calculus results in 'Simple Elementary Fundamental Laws'-based solutions. Complex properties, or "meta-properties", are inferred from a sentence such as "This complex equation or algorithm by itself will intrinsically incorporate actual location [but not actual positions] of all IP numbers". Solving IP problems with complex properties amenable to complex treatments using unusual mathematical tools such as the Dimension (2x - N) system and exact and inexact Dimensional analysis homogeneity, as well as usual mathematical tools such as Calculus, results in 'Complex Elementary Fundamental Laws'-based solutions.

'Gram points' (or Gram[y=0] points) are x-axis intercepts, with the choice of index 'n' for 'Gram points' historically chosen such that the first 'Gram point' [by convention at n = 0] corresponds to the t value which is larger than the (first) nontrivial zero located at t = 14.134725.
'Gram points' - see Appendix A for more details - are IP entities constituted by CIS of R-A [rounded off to six decimal places] with the first six given at n = -3, t = 0; at n = -2, t = 3.436218; at n = -1, t = 9.666908; at n = 0, t = 17.845599; at n = 1, t = 23.170282; and at n = 2, t = 27.670182. We will not calculate any values for Gram[x=0] points. Denoted by parameter t; nontrivial zeros, 'Gram points' and Gram[x=0] points all belong to well-defined CIS of R-A which will twice obey the relevant location definition [in the CIS of R-A themselves and in the CIS of numerical digits after the decimal point of each R-A]. The first and only negative 'Gram point' (at n = -3) is obtained by substituting CP t = 0 resulting in ζ(1/2 + ıt) = ζ(1/2) = -1.4603545, an R-A number [rounded off to seven decimal places] calculated as a limit similar to the limit for Euler-Mascheroni constant or Euler gamma, with its precise (1st) position only determined by computing the positions of all preceding (nil) 'Gram points' in this case. '0' and '1' are special numbers, being neither P nor C, as they represent nothingness (zero) and wholeness (one). In this setting, the ideas of (i) having factors for '0' and '1', or (ii) treating '0' and '1' as CP or IP numbers, are meaningless. All entities derived from well-defined simple/complex algorithms or equations are 'dual numbers' as they can be simultaneously depicted as CP and IP numbers.

Algebraic Number Theory Versus Analytic Number Theory
Set P ⊂ Set Z ⊂ Set Q. Gaussian rationals and Gaussian integers are complex numbers whose real and imaginary parts are (respectively) both rational numbers and integer numbers. Gaussian primes are Gaussian integers z = a + bi satisfying one of the following properties:
1. If both a and b are nonzero, then a + bi is a Gaussian prime iff a² + b² is an ordinary prime [whereby iff is the written abbreviation for 'if and only if'].
2. If a = 0, then bi is a Gaussian prime iff |b| is an ordinary prime and |b| ≡ 3 (mod 4).
3. If b = 0, then a is a Gaussian prime iff |a| is an ordinary prime and |a| ≡ 3 (mod 4).
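The three defining properties can be checked mechanically (a small sketch; `is_ordinary_prime` is our trial-division helper):

```python
def is_ordinary_prime(n):
    """Trial-division primality test for ordinary integers."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def is_gaussian_prime(a, b):
    """Apply the three defining properties of a Gaussian prime a + bi."""
    if a != 0 and b != 0:
        # Property 1: a + bi is prime iff a^2 + b^2 is an ordinary prime.
        return is_ordinary_prime(a * a + b * b)
    # Properties 2 and 3: purely imaginary or purely real Gaussian integers.
    n = abs(a) if b == 0 else abs(b)
    return is_ordinary_prime(n) and n % 4 == 3
```

For instance, 1 + i is a Gaussian prime (since 1² + 1² = 2 is prime) while the ordinary prime 2 is not, because 2 = (1 + i)(1 - i) factors over the Gaussian integers and 2 ≢ 3 (mod 4).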
Algebraic number theory is loosely defined to deal with new number systems involving Completely Predictable or Incompletely Predictable entities such as even & odd numbers, prime & composite numbers, p-adic numbers, Gaussian primes, Gaussian rationals & integers, and complex numbers. A p-adic number is an extension of the field of rationals such that congruences modulo powers of a fixed prime number p are related to proximity in the so-called "p-adic metric". The extension is achieved by an alternative interpretation of the concept of "closeness" or absolute value viz. p-adic numbers are considered to be close when their difference is divisible by a high power of p: the higher the power, the closer they are. This property enables p-adic numbers to encode congruence information in a way that turns out to have powerful applications in number theory including, for example, attacking certain Diophantine equations and in the famous proof of Fermat's Last Theorem by English mathematician Sir Andrew John Wiles in 1995.
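The "closeness" notion above can be made concrete with the p-adic absolute value |n|_p = p^(-v), where v is the exponent of p dividing n. The sketch below is our own illustration (function names are ours):

```python
# Hedged sketch of the p-adic metric: numbers whose difference is divisible
# by a high power of p are "close", since |n|_p = p**(-v) shrinks as the
# power v of p dividing n grows.

def p_adic_valuation(n: int, p: int) -> int:
    """Largest v with p**v dividing n (n != 0)."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def p_adic_abs(n: int, p: int) -> float:
    """p-adic absolute value |n|_p = p**(-v); |0|_p = 0 by convention."""
    if n == 0:
        return 0.0
    return p ** (-p_adic_valuation(n, p))

# 250 and 2 differ by 248 = 2**3 * 31, so they are 2-adically close:
print(p_adic_abs(250 - 2, 2))  # 0.125, i.e. 2**(-3)
```

For instance, 32 is 2-adically much smaller than 3 (|32|_2 = 1/32 versus |3|_2 = 1), inverting the usual ordering by size.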
Analytic number theory is loosely defined to deal with functions of a complex variable such as Riemann zeta function [containing nontrivial zeros and two types of Gram points] and other L-functions. The study of prime numbers, complex numbers and π being braided together in a pleasing trio is usefully visualized to be located at the intersection of these two main branches of number theory. We separate our relatively elementary proof for Riemann hypothesis and relatively elementary explanations for two types of Gram points to belong to Analytic number theory, and our relatively elementary proofs for Polignac's and Twin prime conjectures [expectedly associated with paucity of functions involving a complex variable] to belong to Algebraic number theory.
Secondary spin-offs from solving Riemann hypothesis are often stated as "With this one solution, we have proven five hundred theorems or more at once". This applies to many important theorems in Number theory (mostly on prime numbers) that rely on properties of Riemann zeta function such as where trivial and nontrivial zeros are / are not located. A classical example is the resulting absolute and full delineation of the prime number theorem which relates to the prime counting function. This function, usually denoted by π(x), is defined as the number of prime numbers ≤ x. Public-key cryptography that is widely required for financial security in E-Commerce traditionally depends on the difficult problem of factoring astronomically large numbers into their prime factors. The intrinsic "Incompletely Predictable" property present in prime numbers, composite numbers, nontrivial zeros and two types of Gram points can never be altered to "Completely Predictable" property. For this stated reason, it is a mathematical impossibility that providing rigorous proofs such as for Riemann hypothesis will, in principle, ever result in crypto-apocalypse. However, utilizing parallel computing (more than serial computing), fast supercomputers and far-more-powerful quantum computers would theoretically allow solving the difficult factorization problem in quick time, resulting in less secure encryption and decryption. Then using, for instance, quantum cryptography that relies on principles of quantum mechanics to encrypt and transmit data in a way that cannot be hacked will combat this issue.
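The prime counting function π(x) mentioned above can be computed directly for small x; this sketch is our own illustration (naive trial division, fine only for small arguments):

```python
# Illustrative sketch of the prime counting function pi(x) = |{p prime : p <= x}|,
# computed by naive trial division.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_pi(x: int) -> int:
    """Number of primes <= x."""
    return sum(1 for n in range(2, x + 1) if is_prime(n))

print(prime_pi(10))   # 4  (primes 2, 3, 5, 7)
print(prime_pi(100))  # 25
```

The prime number theorem then describes the asymptotic growth of this count, π(x) ~ x / ln x.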
Remark 1.3. Confirming the locations of the first 10,000,000,000,000 nontrivial zeros on the critical line supports but does not prove Riemann hypothesis to be true.
Locations of the first 10,000,000,000,000 nontrivial zeros on the critical line have previously been computed to be correct. Hardy (Hardy, 1914), and with Littlewood (Hardy & Littlewood, 1921), showed infinitely many nontrivial zeros lie on the critical line by considering moments of certain functions related to ζ(s). This discovery cannot constitute rigorous proof for Riemann hypothesis because they did not exclude the theoretical existence of nontrivial zeros located away from this line. Dimensional analysis (DA) is an analytic tool with DA homogeneity and non-homogeneity (respectively) denoting valid and invalid equations occurring when 'units of measurements' for 'base quantities' are "balanced" and "unbalanced" across both sides of the equation. E.g. equation 2 m + 3 m = 5 m is valid and equation 2 m + 3 kg = 5 'm·kg' is invalid, (respectively) manifesting DA homogeneity and non-homogeneity.
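The unit-balancing example above can be mimicked with a toy bookkeeping scheme; this is purely our own illustration (the representation of quantities as (value, units) pairs is an assumption for the sketch, not the paper's notation):

```python
# Toy sketch of DA homogeneity: a quantity is (value, units) where units
# maps each base unit to its exponent; addition is valid only when units
# match, mirroring '2 m + 3 m = 5 m' versus '2 m + 3 kg'.

def add_quantities(q1, q2):
    """Add (value, units) pairs; raise if units are not homogeneous."""
    (v1, u1), (v2, u2) = q1, q2
    if u1 != u2:
        raise ValueError(f"DA non-homogeneity: cannot add {u1} and {u2}")
    return (v1 + v2, u1)

# 2 m + 3 m = 5 m  -> valid (DA homogeneity)
print(add_quantities((2, {"m": 1}), (3, {"m": 1})))  # (5, {'m': 1})
# 2 m + 3 kg       -> invalid (DA non-homogeneity): raises ValueError
```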

Exact and Inexact Dimensional Analysis Homogeneity for Equations and Inequations
Remark 1.4. We can validly apply exact and inexact Dimensional analysis homogeneity to well-defined equations and inequations.

Footnote 1, 2: Exact & inexact DA homogeneity occur in Dirichlet Sigma-Power Laws as equations or inequations for Gram[y=0] points, Gram[x=0] points & nontrivial zeros.
Law of Continuity is a heuristic principle that whatever succeeds for the finite also succeeds for the infinite. These Laws, which inherently manifest themselves on finite & infinite time scales, should "succeed for the finite, and also succeed for the infinite".
Outline of proof for Riemann hypothesis. {Validity in using the inequations with their Pseudo-(all fractional exponents) given by 2(σ + 1) instead of their [actual] (all fractional exponents) given by (σ + 2) is allowed since the absolute difference between the two terms is simply the constant σ. For equations, their [actual] (all fractional exponents) given by 2(1 − σ) means that the absolute difference between this term and the Pseudo-(all fractional exponents) given by 2(σ + 1) or the [actual] (all fractional exponents) given by (σ + 2) for inequations is, respectively, simply the constant 4σ or 3σ, thus lending further support to this validity.} To simultaneously satisfy two mutually inclusive conditions: I. With rigid manifestation of exact DA homogeneity, Set nontrivial zeros with |nontrivial zeros| = ℵ₀ is located on the critical line (viz. σ = 1/2) when 2(1 − σ) [or 2(σ + 1)] as (all fractional exponents) = whole number '1' [or Pseudo-(all fractional exponents) = whole number '3'] in Dirichlet Sigma-Power Law as equation [or inequation]. II. With rigid manifestation of inexact DA homogeneity, Set nontrivial zeros with |nontrivial zeros| = ℵ₀ is not located on non-critical lines (viz. σ ≠ 1/2). Riemann hypothesis mathematical foot-prints: six identifiable steps to prove Riemann hypothesis. Step 1 Use η(s), proxy for ζ(s), in critical strip.
Step 3 Obtain "simplified" Dirichlet eta function which intrinsically incorporates actual location [but not actual positions] of all nontrivial zeros 4 .
Step 4 Apply Riemann integral to "simplified" Dirichlet eta function in discrete (summation) format.
Step 5 Obtain Dirichlet Sigma-Power Law in continuous (integral) format as equation or inequation.
Step 6 Confirm exact or inexact DA homogeneity for (all fractional exponents) and Pseudo-(all fractional exponents).

Riemann Zeta and Dirichlet Eta Functions
An L-function consists of a Dirichlet series with a functional equation and an Euler product. Examples of L-functions come from modular forms, elliptic curves, number fields, and Dirichlet characters, as well as more generally from automorphic forms, algebraic varieties, and Artin representations. They form an integrated component of the 'L-functions and Modular Forms Database' (LMFDB) with far-reaching implications. In perspective, ζ(s) is the simplest example of an L-function. It is a function of complex variable s (= σ ± ıt) that analytically continues the sum of the infinite series ζ(s) = Σ_{n=1}^∞ 1/n^s = 1/1^s + 1/2^s + 1/3^s + ···, valid for σ > 1. The common convention is to write s as σ + ıt with ı = √−1, and with σ and t real. Also known as the alternating zeta function, η(s) must act as proxy for ζ(s) in the critical strip (viz. 0 < σ < 1) containing the critical line (viz. σ = 1/2) because ζ(s) only converges when σ > 1. This implies ζ(s) is undefined to the left of this region in the critical strip, which then requires the η(s) representation instead. They are related to each other as ζ(s) = γ · η(s) with proportionality factor γ = 1/(1 − 2^(1−s)). Eq. (1) is defined for only the 1 < σ < ∞ region where ζ(s) is absolutely convergent with no zeros located here.
(1), equivalent Euler product formula with product over prime numbers [instead of summation over natural numbers] also represents ζ(s) =⇒ all prime numbers are (intrinsically) "encoded" in ζ(s). This observation alone represents a strong reason to conveniently combine proofs for Riemann hypothesis, Polignac's and Twin prime conjectures in our [one] paper.
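The proxy relation ζ(s) = η(s)/(1 − 2^(1−s)) can be checked numerically with a plain partial sum; the sketch below is our own (function names and the 10^5-term cutoff are our choices, and the truncation makes accuracy modest, especially for complex s):

```python
# Numerical sketch: the alternating series for eta(s) converges for sigma > 0,
# and zeta(s) is recovered as eta(s) / (1 - 2**(1 - s)).
import math

def eta(s, terms: int = 10**5) -> complex:
    """Partial sum of the Dirichlet eta (alternating zeta) function."""
    return sum((-1) ** (n - 1) / n ** s for n in range(1, terms + 1))

def zeta_via_eta(s, terms: int = 10**5) -> complex:
    """zeta(s) via the eta proxy relation."""
    return eta(s, terms) / (1 - 2 ** (1 - s))

# Sanity check at s = 2, where zeta(2) = pi**2 / 6:
print(abs(zeta_via_eta(2.0) - math.pi ** 2 / 6))  # close to 0
```

The same function accepts complex s (e.g. s = complex(0.5, 14.13)), which is precisely the role of η(s) as proxy inside the critical strip, though many more terms or acceleration methods are needed there for good accuracy.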
Euler formula is commonly stated as e^(ıx) = cos x + ı · sin x. Euler identity (where x = π) is e^(ıπ) = cos π + ı · sin π = −1 + 0 [or stated as e^(ıπ) + 1 = 0]. The n^s of ζ(s) is expanded to n^s = n^(σ+ıt) = n^σ · e^(t ln(n)·ı) since n^t = e^(t ln(n)). Applying Euler formula to n^s results in n^s = n^σ(cos(t ln(n)) + ı · sin(t ln(n))). This is written in trigonometric form [designated by short-hand notation n^s(Euler)] whereby n^σ is the modulus and t ln(n) is the polar angle (argument).
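This trigonometric form can be confirmed numerically; the following sketch is our own (the particular values of n, σ and t are illustrative only):

```python
# Quick numerical confirmation of the trigonometric form:
# n**s = n**sigma * (cos(t*ln n) + i*sin(t*ln n)) for s = sigma + i*t.
import math

def n_pow_s_euler(n: float, sigma: float, t: float) -> complex:
    """n**(sigma + i*t) via modulus n**sigma and argument t*ln(n)."""
    arg = t * math.log(n)
    return n ** sigma * complex(math.cos(arg), math.sin(arg))

n, sigma, t = 5.0, 0.5, 14.134725
direct = n ** complex(sigma, t)          # Python's built-in complex power
via_euler = n_pow_s_euler(n, sigma, t)
print(abs(direct - via_euler))           # ~0 up to floating-point rounding
```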
Riemann hypothesis proposed all nontrivial zeros to be located on the critical line. This location is conjectured to be uniquely associated with presence of exact DA homogeneity in the derived equation & inequation of Dirichlet Sigma-Power Law, with Eq. (4) intrinsically incorporated into this Law since the η(s) = 0 definition for nontrivial zeros equates to Eq. (4).
When depicted in terms of Eq. (4), Eq. (5) and Eq. (6) in discrete (summation) format are non-Hybrid integer sequence equations (see Appendix C). η(s) calculations for all σ values result in infinitely many non-Hybrid integer sequence equations for the 0<σ<1 critical strip region of interest with n = 1, 2, 3, 4, 5, ..., ∞ as discrete integer number values, or n = 1 to ∞ as continuous real number values with Riemann integral application. These equations will geometrically represent the entire plane of the critical strip, thus (at least) allowing our proposed proof to be of a complete nature.
Finally, Eq. (6) being the "simplified" Dirichlet eta function derived directly from η(s) will intrinsically incorporate actual location [but not actual positions] of all nontrivial zeros. The proof is now complete for Lemma 3.1.
Proposition 3.2. Dirichlet Sigma-Power Law in continuous (integral) format given as equation and inequation can both be derived directly from "simplified" Dirichlet eta function in discrete (summation) format with Riemann integral application.
[Note: Dirichlet Sigma-Power Law in continuous (integral) format refers to the end-product obtained from "first key step of converting Riemann zeta function into its continuous format version".] Proof. In Calculus, integration is the reverse process of differentiation, viewed geometrically as the numerical "total area value" solution enclosed by the curve of the function and the x-axis. Applying the definite integral I between limits (or points) a and b computes this area. Then Dirichlet Sigma-Power Law will also fulfil this criterion. Due to its resemblance to power law functions in σ from s = σ + ıt being the exponent of a power function n^σ, logarithm scale use, and the harmonic ζ(s) series connection in Zipf's law; we elect to call this Law by its given name. A characteristic and crucial part of this Law is its exact formula expression in usual mathematical language [y = f(x₁, x₂) format description for a 2-variable function with (2n) and (2n − 1) as 'base quantities'] consisting of y = f(t, σ) with discrete n = 1, 2, 3, 4, 5, ..., ∞ or continuous n = 1 to ∞; −∞<t<+∞; and 0<σ<1.
A proper integral is a definite integral which has neither limit a nor b infinite and in which the integrand does not approach infinity at any point in the range of integration. Only a proper integral will have its [solitary] combined +ve (above x-axis) and -ve (below x-axis) non-zero numerical "total area value" solution successfully computed from applying Riemann integral. An improper integral is a definite integral that has either or both limits a and b infinite or an integrand that approaches infinity at one or more points in the range of integration.
The resulting Dirichlet Sigma-Power Law, being an improper integral (with lower limit a = 1 and upper limit b = ∞) obtained from [validly] applying Riemann integral to "simplified" Dirichlet eta function, will [expectedly] have its [multiple] +ve (above x-axis) minus -ve (below x-axis) numerical "net area value" solutions successfully computed (see Propositions 3.3 and 3.4 below). All relevant antiderivatives in this paper are derived from improper integrals with format ∫₁^∞ f(n) dn based on Eqs. (6), (17) & (19). For Eq. (6), the involved improper integrals are seen to involve the [periodic] sine function between limits 1 and ∞. Each improper integral can be validly expanded as a cumulative sum of integrals over successive unit intervals which, for all sufficiently large t as t → ∞, will manifest divergence by oscillation (viz. for all sufficiently large t as t → ∞, this cumulative total will neither converge to a solitary well-defined limit value such as sin π/2 = 1 nor diverge in a particular direction to a less well-defined limit value such as +∞).
With steps of manual integration shown using indefinite integrals [for simplicity], we solve the definite integral based on the numerator portion of R1 with (2n) parameter in Eq. (6). We deduce most other important integrals to be "variations" of this particular integral containing (i) deletion of (2n)^(−σ), √2 or (3/4)π terms, and/or (ii) interchange of sine and cosine functions. We check all derived antiderivatives to be correct using the computer algebra system Maxima.
Simplifying and applying linearity, we now solve ∫ e^((1−σ)u/t) sin(u) du. We integrate by parts twice in a row: ∫ f g′ = f g − ∫ f′ g.
As the integral ∫ e^((1−σ)u/t) sin(u) du appears again on the Right Hand Side, we solve for it. We then undo the substitution u = t ln(2n) + 3π/4 and simplify. By rewriting and simplifying the denominator portion of R1 with (2n − 1) parameter in Eq. (6), Eq. (7) equates to the Dirichlet Sigma-Power Law as equation derived from Eq. (6). Apply Ratio Study to Eq. (6) (see Appendix B). This involves [intentional] incorrect but "balanced" rearrangement of terms in Eq. (6) giving rise to Eq. (10), which is a non-Hybrid integer sequence inequation. The left-hand side contains a 'cyclical' sine function in the first term (Ratio R1) and a 'non-cyclical' power function in the second term (Ratio R2).
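Writing a for the constant (1−σ)/t appearing in the exponent, the double integration by parts yields the standard closed form ∫ e^(au) sin(u) du = e^(au)(a sin u − cos u)/(1 + a²) + C. As an independent numerical check of this antiderivative (complementing the Maxima verification mentioned earlier; the value a = 0.3 is an arbitrary illustration), we can differentiate it by central differences:

```python
# Check: F(u) = e**(a*u) * (a*sin u - cos u) / (1 + a**2) should satisfy
# F'(u) = e**(a*u) * sin(u).  We compare a central-difference derivative
# of F against the integrand at an arbitrary point.
import math

def F(u: float, a: float) -> float:
    """Candidate antiderivative of e**(a*u) * sin(u)."""
    return math.exp(a * u) * (a * math.sin(u) - math.cos(u)) / (1 + a * a)

def integrand(u: float, a: float) -> float:
    return math.exp(a * u) * math.sin(u)

a, u, h = 0.3, 2.0, 1e-6
numeric_derivative = (F(u + h, a) - F(u - h, a)) / (2 * h)
print(abs(numeric_derivative - integrand(u, a)))  # tiny (rounding-level)
```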
Proof. Preliminary discussion on using three types of symmetry for a given function: (1) symmetry about the vertical y-axis ["function is even"] e.g. cosine, arccos; (2) symmetry about the origin ["function is odd"] e.g. sine, arcsin, tangent, arctan; and (3) all other cases ["function is neither even nor odd"]. An even function has its Cumulative Total areas symmetrical about the vertical axis and an odd function has its Cumulative Total areas symmetrical about the origin (with conservation or preservation of areas derived from [opposite side] numerical "net area value" always equal to zero in both cases).
We classify our antiderivatives below using these functions with their basic properties such as sum [or difference] of two even (odd) functions is even (odd); sum [or difference] of an even and odd function is neither even nor odd (unless one of the functions is equal to zero over the given domain); product [or quotient] of two even or odd functions is an even function; and product [or quotient] of an even function and an odd function is an odd function. We will shortly see that only Dirichlet Sigma-Power Laws as equation and inequation pertaining to calculations intended for Gram[x=0,y=0] points will uniquely manifest "functions that are neither even nor odd".

Rigorous Proof for Riemann Hypothesis Summarized as Theorem Riemann I -IV
For 0 < σ < 1, then 2 < 2(σ + 1) < 4. The only whole number between 2 & 4 is '3', which coincides with σ = 1/2. When 0 < σ < 1/2 & 1/2 < σ < 1, then [correspondingly] 2<2(σ + 1)<3 & 3<2(σ + 1)<4. Legend: R = all real numbers. For 0 < σ < 1, σ consists of 0 < R < 1. For 0 < 2(1 − σ) < 2 and 2 < 2(σ + 1) < 4, 2(1 − σ) and 2(σ + 1) must (respectively) consist of 0 < R < 2 and 2 < R < 4. An important caveat is that previously used phrases such as "(all fractional exponents) = whole number '1' / fractional number ≠ 1 [or Pseudo-(all fractional exponents) = whole number '3' / fractional number ≠ 3]", although not incorrect per se, should respectively be replaced by "(all real exponents) = whole number '1' / real number ≠ 1 [or Pseudo-(all real exponents) = whole number '3' / real number ≠ 3]" for complete accuracy. We apply this caveat to Theorem Riemann I - IV. Proof. Since s = σ ± ıt, the complete set of nontrivial zeros which is defined by η(s) = 0 is exclusively associated with one (and only one) particular η(σ ± ıt) = 0 value solution, and by default one (and only one) particular σ [conjecturally] = 1/2 value solution. When performing exact DA homogeneity on Dirichlet Sigma-Power Law as equation and inequation [with both containing de novo property for "actual location of all nontrivial zeros"], the phrase "If real number exponent σ has exclusively 1/2 value, only then will exact DA homogeneity be satisfied" implies one (and only one) possible mathematical solution. Theorem Riemann III reflects Theorem Riemann II on presence of exact DA homogeneity for σ = 1/2 in Dirichlet Sigma-Power Law as equation and inequation. This Law has the identical σ variable as that referred to by Riemann hypothesis [whereby σ here uniquely refers to the critical line].
The proof for Theorem Riemann III is now complete as it independently refers to simultaneous association of confirmed (i) solitary σ = 1/2 value in Dirichlet Sigma-Power Law as equation and inequation satisfying exact DA homogeneity and (ii) critical line defined by solitary σ = 1/2 value being the "actual location [but with no request to determine actual positions]" of all nontrivial zeros as proposed in original Riemann hypothesis. Theorem Riemann IV. Condition 1. All σ ≠ 1/2 values (non-critical lines), viz. 0 < σ < 1/2 and 1/2 < σ < 1 values, exclusively do not contain "actual location of all nontrivial zeros" [manifesting de novo inexact DA homogeneity in equation and inequation], together with Condition 2. One (& only one) σ = 1/2 value (critical line) exclusively contains "actual location of all nontrivial zeros" [manifesting de novo exact DA homogeneity in equation and inequation], confirm Riemann hypothesis to be true when these two mutually inclusive conditions are met. Proof. Condition 2 Theorem Riemann IV simply reflects proof from Theorem Riemann III [incorporating Proposition 3.3] for "actual location of all nontrivial zeros" exclusively on critical line manifesting de novo exact DA homogeneity (all real number exponents) = real number '1' for equation [or Pseudo-(all real number exponents) = real number '3' for inequation]. The proof for Condition 2 Theorem Riemann IV is now complete. Corollary 3.4 confirms de novo inexact DA homogeneity manifested as (all real number exponents) = real number ≠ 1 for equation [or Pseudo-(all real number exponents) = real number ≠ 3 for inequation] by all σ ≠ 1/2 values (non-critical lines) that are exclusively not associated with "actual location of all nontrivial zeros". Applying inclusion-exclusion principle: Exclusive presence of nontrivial zeros on critical line for Condition 2 Theorem Riemann IV implies exclusive absence of nontrivial zeros on non-critical lines for Condition 1 Theorem Riemann IV.
The proof for Condition 1 Theorem Riemann IV is now complete.
We logically deduce that an explicit mathematical explanation why presence & absence of nontrivial zeros should (respectively) coincide precisely with σ = 1/2 & σ ≠ 1/2 [literally the Completely Predictable meta-properties ('overall' complex properties)] requires "complex" mathematical arguments. Attempting to provide explicit mathematical explanation with "simple" mathematical arguments would intuitively mean nontrivial zeros have to be (incorrectly & impossibly) treated as Completely Predictable entities.
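The exponent bookkeeping running through Theorem Riemann I - IV can be sketched numerically; the scan below is our own illustration (the 0.01 grid step is an arbitrary choice) of the claim that the exponents 2(1 − σ) and 2(σ + 1) take whole-number values ('1' and '3') only on the critical line:

```python
# Sketch: over 0 < sigma < 1, the exponent pair (2*(1 - sigma), 2*(sigma + 1))
# consists of whole numbers only at sigma = 1/2, where it equals (1.0, 3.0).

def exponents(sigma: float):
    """Return the pair (2*(1 - sigma), 2*(sigma + 1))."""
    return 2 * (1 - sigma), 2 * (sigma + 1)

whole = [s / 100 for s in range(1, 100)
         if all(e == int(e) for e in exponents(s / 100))]
print(whole)  # [0.5]
```

This is of course only a finite-grid illustration; the exact statement follows from 2(1 − σ) = 2 − 2σ being an integer on (0, 1) only at σ = 1/2.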

Prime and Composite Numbers
Prime & Composite numbers are Incompletely Predictable entities dependently linked together in a sequential, cumulative & eternal manner since the relationship Number '1' + Prime numbers + Composite numbers = Natural numbers holds for all Natural numbers.
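The stated relationship can be verified directly over any finite range; this sketch is our own illustration using naive trial division:

```python
# Numerical sketch of the identity: for every x,
# 1 + (number of primes <= x) + (number of composites <= x) = x.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for x in range(1, 500):
    primes = sum(1 for n in range(2, x + 1) if is_prime(n))
    composites = sum(1 for n in range(2, x + 1) if not is_prime(n))
    assert 1 + primes + composites == x  # Number '1' + P + C = N up to x
print("identity holds for x = 1 .. 499")
```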

Dimensional Analysis on Cardinality and "Dimensions" for Prime Numbers
We use the word "Dimensions" to denote well-defined Incompletely Predictable entities obtained from using our unique Dimension (2x -N) system. Relevant "Dimensions" dependently represent Number '1', P and C. Then by default any (sub)sets of P and C in well-defined equations can also be represented by their corresponding "Dimensions".
Remark 6.1. We can apply Dimensional analysis to "Dimensions" from Information-Complexity conservation and cardinality of relevant sets in certain well-defined equations.
Let "Dimensions" and different (sub)sets of E, O, N, P and C be 'base quantities'. Then exponent '1' of "Dimensions" and cardinality of these (sub)sets in well-defined equations are corresponding 'units of measurement'. Performing DA on "Dimensions" for PC pairing is depicted later on. Performing DA on cardinality is depicted next.
Step 2 Considering i ∈ E, confirm perpetual recurrences of individual E prime gap = i (associated with its unique odd P_i) occur only when depicted as specific groupings of Dimension (2x − N) now endowed with exponent '1' for all ranges of x.
Step 3 Perform DA on exponent '1' in these Dimensions.
Step 4 Perform DA on equation Set odd P = ∪_{i=2}^{∞} Subset odd P_i to obtain |odd P| = |odd P_i| = ℵ₀ whereby Subset odd P_i is derived from its associated unique E prime gap = i with |E prime gaps| = ℵ₀.
Step 5 Confirm 'Prime number' variable and 'Prime gap' variable complex algorithm "containing" all P with knowing their overall actual location [but not actual positions] 8 .
Step 6   .... Legend: maximal prime gaps are depicted with asterisk symbol (*) and non-maximal prime gaps are depicted without asterisk symbol.

Brief Overview of Polignac's and Twin Prime Conjectures
Occurring over 2000 years ago (c. 300 BC), ancient Euclid's proof on infinitude of P in totality [viz. |P| = ℵ₀ for Set P] predominantly by reductio ad absurdum (proof by contradiction) is the earliest known but not the only proof for this simple problem in Number theory. Since then dozens of proofs have been devised such as three chronologically listed: Goldbach's proof using Fermat numbers (written in a letter to Swiss mathematician Leonhard Euler, July 1730), Furstenberg's topological proof in 1955 (Furstenberg, 1955), and Filip Saidak's proof in 2006 (Saidak, 2006). The strangest candidate is likely to be Furstenberg's topological proof. In 2013, Yitang Zhang proved a landmark result showing some unknown even number 'N' < 70 million such that there are infinitely many pairs of P that differ by 'N' (Zhang, 2014). By optimizing Zhang's bound, subsequent Polymath Project collaborative efforts using a new refinement of the GPY sieve in 2013 lowered 'N' to 246; and assuming Elliott-Halberstam conjecture and its generalized form have further lowered 'N' to 12 and 6, respectively. Then 'N' intuitively has more than one valid value such that there are infinitely many pairs of P that differ by each of those 'N' values [thus proving existence of more than one Subset odd P_i with |odd P_i| = ℵ₀]. We can only theoretically lower 'N' to 2 (in regards to P with 'small gaps') but there are still an infinite number of E prime gaps (in regards to P with 'large gaps') that require "the proof that each will generate its unique set of infinite P". Remark 6.2. Existence of maximal and non-maximal prime gaps supplies crucial indirect evidence to intuitively support but does not prove "Each even prime gap will generate an infinite magnitude of odd prime numbers on its own accord".
Comments relevant to Remark 6.2 are given in the next section below.

Supportive Role of Maximal and Non-Maximal Prime Gaps
We analyze data of all P obtained when extrapolated out over a wide range of x ≥ 2 integer values. As the sequence of P carries on, P with ever larger prime gaps appear. For a given range of x integer values, prime gap = n₂ is a 'maximal prime gap' if prime gap = n₁ < prime gap = n₂ for all n₁ < n₂. In other words, the largest such prime gaps in this range are called maximal prime gaps. The term 'first occurrence prime gaps' refers to first occurrences of maximal prime gaps whereby maximal prime gaps are prime gaps of "at least of this length". We use maximal prime gaps to denote 'first occurrence prime gaps'. CIS non-maximal prime gaps (endorsed with nickname 'slow jumpers') always lag behind CIS maximal prime gaps for onset appearances in P sequence. These are shown for the first 17 prime gaps in Table 4. Apart from O prime gap = 1 representing solitary even P '2', remaining P in Table 4 consist of representative single odd P for each E prime gap. These odd P individually make one-off appearances in P sequence in a perpetual albeit Incompletely Predictable manner. The initial seven of the [majority] "missing" odd P are 5, 11, 13, 17, 19, 29, 31, ...; belonging to Subset P with 'residual' prime gaps, they are a potential source of odd P in relation to the proposal that each E prime gap from Set E prime gaps will generate its specific Subset odd P. Set all P from all prime gaps = Subset P from maximal prime gaps + Subset P from non-maximal prime gaps + Subset P from 'residual' prime gaps. Subset P from 'residual' prime gaps with representation from all E prime gaps includes all correctly selected "missing" odd P. These observations support but do not prove the proposition that each E prime gap will generate its own Subset odd P with |odd P| = ℵ₀. For i ∈ N; primorial P_i# is the analog of the usual factorial for P = 2, 3, 5, 7, 11, 13, ....
Then P₁# = 2, P₂# = 2 × 3 = 6, P₃# = 2 × 3 × 5 = 30, P₄# = 2 × 3 × 5 × 7 = 210, P₅# = 2 × 3 × 5 × 7 × 11 = 2310, P₆# = 2 × 3 × 5 × 7 × 11 × 13 = 30030, etc. English mathematician John Horton Conway coined the term 'jumping champion' in 1993. An integer n is a 'jumping champion' if n is the most frequently occurring difference (prime gap) between consecutive P<x for some x integer values. Example: for any x with 7<x<131, n = 2 (indicating twin P) is the 'jumping champion'. It has been conjectured that (i) the only 'jumping champions' are 1, 4 and the primorials 2, 6, 30, 210, 2310, 30030, ... and (ii) 'jumping champions' tend to infinity. Their required proofs will likely need proof of the k-tuple conjecture. P from 'jumping champion' prime gaps have their onset appearances in P sequence in a perpetual albeit Incompletely Predictable manner [as another example to that outlined in the previous paragraph].
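Both notions above (first-occurrence maximal prime gaps and jumping champions) are directly computable; the sketch below is our own illustration (function names and cutoffs are our choices), and its outputs match the record gaps at 2, 3, 7, 23, 89, 113 and the twin-prime champion gap 2:

```python
# Sketch: 'first occurrence' maximal (record) prime gaps, and Conway's
# 'jumping champion' (most frequent gap between consecutive primes < x).
from collections import Counter

def primes_upto(limit: int):
    """Sieve of Eratosthenes returning all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for n in range(2, int(limit ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n::n] = [False] * len(sieve[n * n::n])
    return [n for n, ok in enumerate(sieve) if ok]

def maximal_gaps(limit: int):
    """Record gaps as (gap, prime opening the gap), in order of onset."""
    ps = primes_upto(limit)
    records, best = [], 0
    for p, q in zip(ps, ps[1:]):
        if q - p > best:
            best = q - p
            records.append((best, p))
    return records

def jumping_champions(x: int):
    """Gap value(s) occurring most often between consecutive primes < x."""
    ps = [p for p in primes_upto(x) if p < x]
    gaps = Counter(q - p for p, q in zip(ps, ps[1:]))
    top = max(gaps.values())
    return sorted(g for g, c in gaps.items() if c == top)

print(maximal_gaps(150))       # [(1, 2), (2, 3), (4, 7), (6, 23), (8, 89), (14, 113)]
print(jumping_champions(100))  # [2] -> the twin-prime gap is champion
```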

Information-Complexity Conservation
A formula, as equation or algorithm, is simply a Black Box generating necessary Output (with qualitative-like structural 'Complexity') when supplied with given Input (with quantitative-like data 'Information'). This Information-based complexity is literally what is referred to in 'Information-Complexity conservation'. P and C numbers are traditionally "analyzed separately". The key definition behind Dimension (2x − N) is used to abstractly represent dependent P and C numbers (and Number '1') in a combined manner whereby N = 2x − ΣPC_x-Gap. This will lead to required mathematical arguments based on Information-Complexity conservation and patterns in Gap 1, 2, 3, ..., +∞. Let x be from Set X such that x ∈ N. Consider x for upper boundary of interest in Set X whereby X is chosen from N, E, O, P or C.
Lemma 8.1. Natural counting function N-π(x), defined as |N ≤ x|, is Completely Predictable by independently using simple algorithm to be equal to x.
Proof. Formula to generate N with 100% certainty is N_i = i whereby N_i is the i-th N and i = 1, 2, 3, ..., ∞. For a given N_i, its i-th position is simply i. Natural gap (G_{N_i}) = N_{i+1} − N_i = 1. Thus N-π(x) = |N ≤ x| = x. The proof is now complete for Lemma 8.1. Lemma 8.2. Even counting function E-π(x), defined as |E ≤ x|, is Completely Predictable by independently using simple algorithm to be equal to floor(x/2).
Proof. Formula to generate E with 100% certainty is E_i = i × 2 whereby E_i is the i-th E and i = 1, 2, 3, ..., ∞ abiding to mathematical label "All N always ending with a digit 0, 2, 4, 6 or 8". For a given E_i, its i-th position is calculated as i = E_i/2. Thus E-π(x) = |E ≤ x| = floor(x/2). The proof is now complete for Lemma 8.2. Lemma 8.3. Odd counting function O-π(x), defined as |O ≤ x|, is Completely Predictable by independently using simple algorithm to be equal to ceiling(x/2).
Proof. Formula to generate O with 100% certainty is O_i = (i × 2) − 1 whereby O_i is the i-th odd number and i = 1, 2, 3, ..., ∞ abiding to mathematical label "All N always ending with a digit 1, 3, 5, 7, or 9". For a given O_i number, its i-th position is calculated as i = (O_i + 1)/2. There are ceiling(x/2) O ≤ x. Thus O-π(x) = |O ≤ x| = ceiling(x/2). The proof is now complete for Lemma 8.3.
Lemma 8.4. Prime counting function P-π(x), defined as |P ≤ x|, is Incompletely Predictable with Set P dependently obtained using complex algorithm Sieve of Eratosthenes.
Proof. Algorithm to generate P_i whereby P₁ (= 2), P₂ (= 3), P₃ (= 5), P₄ (= 7), ..., ∞ with 100% certainty is based on Sieve of Eratosthenes abiding to mathematical label "All N apart from 1 that are evenly divisible only by itself and by 1". Although we can check primality of a given O by trial division, we can never determine its position without knowing positions of preceding P. Prime gap (G_{P_i}) = P_{i+1} − P_i, with G_{P_i} constituted by all E except the 1st G_{P_1} = 3 − 2 = 1. P-π(x) = |P ≤ x|. This is Incompletely Predictable and is calculated via the mentioned algorithm. Using definition of prime gap, every P [represented here with aid of 'i' notation] is written as P_{i+1} = P_i + G_{P_i} with P₁ = 2. Here i = 1, 2, 3, 4, 5, ..., ∞. The proof is now complete for Lemma 8.4.
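The ingredients of Lemma 8.4 can be exercised directly; this sketch is our own illustration (function names are ours) of P-π(x) via the Sieve of Eratosthenes and of the recurrence P_{i+1} = P_i + G_{P_i}:

```python
# Sketch: P-pi(x) computed with the Sieve of Eratosthenes, plus checks that
# the first prime gap is 1 (3 - 2) and every later gap is even.

def primes_upto(limit: int):
    """Sieve of Eratosthenes returning all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for n in range(2, int(limit ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n::n] = [False] * len(sieve[n * n::n])
    return [n for n, ok in enumerate(sieve) if ok]

def prime_pi(x: int) -> int:
    """P-pi(x) = |P <= x|."""
    return len(primes_upto(x))

ps = primes_upto(100)
gaps = [q - p for p, q in zip(ps, ps[1:])]
# Recurrence check: every prime is the previous prime plus its gap.
assert all(q == p + g for (p, q), g in zip(zip(ps, ps[1:]), gaps))
print(prime_pi(100), gaps[:5])  # 25 [1, 2, 2, 4, 2]
```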
Lemma 8.5. Composite counting function C-π(x), defined as |C ≤ x|, is Incompletely Predictable with Set C derived as Set N − Set P [dependently obtained using complex algorithm Sieve of Eratosthenes] − Number '1'.
Proof. Composite numbers abide to mathematical label "All N apart from 1 that are evenly divisible by numbers other than itself and 1". Algorithm to generate C_i whereby C₁ (= 4), C₂ (= 6), C₃ (= 8), C₄ (= 9), ..., ∞ with 100% certainty is based [indirectly] on Sieve of Eratosthenes via selecting non-prime N to be C. We define Composite gap G_{C_i} as C_{i+1} − C_i with G_{C_i} constituted by 1 & 2. C-π(x) = |C ≤ x|. This is Incompletely Predictable and needs to be calculated indirectly via the mentioned algorithm. Using definition of composite gap, every C [represented here with aid of 'i' notation] is written as C_{i+1} = C_i + G_{C_i} with C₁ = 4. Here i = 1, 2, 3, 4, 5, ..., ∞. The proof is now complete for Lemma 8.5. Denote X to be N, E, O, P or C. X-π(x) = |X ≤ x| with x ∈ N. We define and compute entity 'Grand-Total Gaps for X at x' (Grand-Total ΣX_x-Gaps).
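The claim that composite gaps take only the values 1 & 2 is easily probed numerically; this sketch is our own illustration (it checks a finite range; the general reason is that a composite gap of 3 or more would require two consecutive integers above 4 to both be prime, which only 2 and 3 achieve):

```python
# Sketch: composite gaps G_C = C_(i+1) - C_i observed over 2..999 take only
# the values 1 (adjacent composites) and 2 (composites straddling a prime).

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

composites = [n for n in range(2, 1000) if not is_prime(n)]
gaps = {q - p for p, q in zip(composites, composites[1:])}
print(composites[:4], gaps)  # [4, 6, 8, 9] {1, 2}
```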
Proposition 8.6. For any given x ≥ 1 values in Set N, designated Complexity is represented by ΣN x -Gaps = x -N with N = 1.
Proposition 8.7. For any given x ≥ 1 values in constituent Set E and Set O, designated Complexity is represented by ΣEO x -Gaps = 2x -N with N = 4 being baseline minimal.
Proposition 8.8. For selected x ≥ 2 values in constituent Set P and Set C, designated Complexity is cyclically represented by ΣPC x -Gaps = 2x -N with N = 7 being baseline maximal.
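Proposition 8.8 can be probed numerically. The reading below is our own hedged interpretation: ΣPC_x-Gaps is taken as the sum of all prime gaps plus all composite gaps among numbers ≤ x, so that N = 2x − ΣPC_x-Gaps; with that reading, N stays at or above the baseline value 7 and repeatedly resets to it:

```python
# Hedged sketch of Proposition 8.8 as we read it:
# N = 2x - [(sum of prime gaps <= x) + (sum of composite gaps <= x)],
# which cycles on and above the baseline value 7.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def dimension_N(x: int) -> int:
    """N = 2x - Sigma-PC_x-Gaps under our reading of the definition."""
    ps = [n for n in range(2, x + 1) if is_prime(n)]
    cs = [n for n in range(2, x + 1) if not is_prime(n)]
    gap_total = sum(q - p for p, q in zip(ps, ps[1:]))
    gap_total += sum(q - p for p, q in zip(cs, cs[1:]))
    return 2 * x - gap_total

print([dimension_N(x) for x in range(13, 22)])  # [7, 7, 8, 9, 7, 7, 7, 7, 8]
```

Since the gap sums telescope, N = 2x − (P_last − 2) − (C_last − 4) where P_last and C_last are the largest prime and composite ≤ x; hence N ≥ 7 always, with equality exactly when x and x − 1 are that prime-composite pair.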
Bottom graph in Figure 23 symbolically represents "Dimensions" using ever larger negative integers. Dimensions 2x − 7, 2x − 8, 2x − 9, ..., 2x − ∞ are symbolically represented by −7, −8, −9, ..., −∞ with 2x − 7 displayed as 'baseline' Dimension whereby Dimension trend (Cumulative Sum Gaps) must repeatedly reset itself onto this 'baseline' Dimension on a perpetual basis. Dimensions represented by ever larger negative integers will correspond to P associated with ever larger prime gaps and this phenomenon will generally happen at ever larger x values (with complete presence of Chaos and Fractals being manifested in our graph). At ever larger x values, P-π(x) will overall become larger but with a decelerating trend whereas C-π(x) will overall become larger but with an accelerating trend. This supports ever larger prime gaps appearing at ever larger x values. Table 5 Prime-Composite finite scale mathematical (tabulated) landscape. Data for x = 2 to 64. Legend: C = composite, P = prime, Dim = Dimension, Y = 2x − 7 (for visual clarity), N/A = Not Applicable.

Polignac's and Twin Prime Conjectures
Previous section alludes to the P-C finite scale mathematical landscape. This section alludes to the P-C infinite scale mathematical landscape. Let 'Y' symbolize (baseline) Dimension 2x - 7. Let prime gap at P i = P i+1 - P i with P i & P i+1 respectively symbolizing the consecutive "first" & "second" P in any P i -P i+1 pairings. We denote (i) Dimensions YY grouping [depicted by 2x - 7 initially appearing twice in (iii)] to represent the signal for appearances of P pairings other than twin P such as cousin P, sexy P, etc; (ii) Dimension YYYY grouping to represent the signal for appearances of P pairings as twin P; and (iii) Dimension (2x - ≥7)-Progressive-Grouping allocated to 2x - 7, 2x - 7, 2x - 8, 2x - 9, 2x - 10, 2x - 11, ..., 2x - ∞ as elements of precise and proportionate CFS Dimensions representation of an individual P i with its associated prime gap; namely, Dimensions 2x - 7 & 2x - 7 pairing = twin P (with both its prime gap & CFS cardinality = 2); 2x - 7, 2x - 7, 2x - 8 & 2x - 9 pairing = cousin P (with both its prime gap & CFS cardinality = 4); 2x - 7, 2x - 7, 2x - 8, 2x - 9, 2x - 10 & 2x - 11 pairing = sexy P (with both its prime gap & CFS cardinality = 6); and so on. The higher order [traditionally defined as closest possible] prime groupings of three P as prime triplets, of four P as prime quadruplets, of five P as prime quintuplets, etc consist of serendipitous groupings abiding by the mathematical rule: With the exception of the three 'outlier' P 3, 5 & 7; grouping of any three P as a P, P+2, P+4 combination (viz. manifesting two consecutive twin P) is a mathematical impossibility. The 'anomaly' that one of every three consecutive O is a multiple of three, and hence cannot be P, explains this impossibility. Thus the closest possible P grouping [viz. for prime triplet] must be either the P, P+2, P+6 or the P, P+4, P+6 format.
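The stated impossibility of a P, P+2, P+4 combination beyond the 'outlier' 3, 5, 7 is easy to verify by brute force; a minimal sketch (the search bound 10,000 is our arbitrary choice):

```python
def is_prime(n):
    """Trial-division primality test, adequate for this small search."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# Search for P such that P, P+2 and P+4 are all prime (two consecutive twin P).
pattern_hits = [p for p in range(2, 10_000)
                if is_prime(p) and is_prime(p + 2) and is_prime(p + 4)]
```

The search finds only p = 3 (the triplet 3, 5, 7), consistent with the divisibility-by-three argument.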
P groupings not respecting traditional closest-possible-prime groupings are also the norm occurring infinitely often, indicating continual presence of prime gaps ≥ 6. As P become sparser at larger range, perpetual presence of (i) prime gaps ≥ 6 [proposed to arbitrarily represent 'large gaps'] and (ii) prime gaps 2 & 4 [proposed to arbitrarily represent 'small gaps'] with progressively greater magnitude will cumulatively occur for each prime gap but always in a decelerating manner.
With the permanent requirement at larger range of intermittently resetting to baseline Dimension 2x - 7 occurring [either two or] four times in a row, nature seems to dictate that, at the very least, perpetual occurrences of twin P or of one other non-twin P type are inevitable.
We dissect the Dimension YYYY unique signal for twin P appearances: The initial two CFS Dimensions YY components of YYYY represent the "first" P component of a twin P pairing. The last two Dimensions YY components of YYYY, signifying appearance of the "second" P component of the twin P pairing, are also the initial first-two-element component of the full CFS Dimensions representation for the "first" P component of the following non-twin P pairing. Twin P are uniquely represented by the repeating single type Dimension 2x - 7. All other 'higher order' P pairings (with prime gaps ≥ 4) require multiple types of Dimension representation. There is a qualitative association whereby the single type Dimension representation for twin P results in the "less colorful" Plus Gap 2 Composite Number Continuous Law, as opposed to the multiple types Dimension representation for all other 'higher order' P pairings resulting in the "more colorful" Plus-Minus Gap 2 Composite Number Alternating Law. 'Gap 2 Composite Number' occurrences in both Laws on finite scale are (directly) observed in Figure 23 & Table 5 for x = 2 to 64, and on infinite scale are (indirectly) deduced using logical arguments for all x values.
The overall sum total of individual CFS Dimensions required to represent every P is infinite in magnitude as |all P| = ℵ 0 . Standalone Dimensions YY groupings [representing signals for "higher order" non-twin P appearances] &/or front Dimensions YY (sub)groupings [which by themselves fully represent twin P as Dimensions YYYY appearances] need to recur on an indefinite basis. Then twin P and "higher order" cousin P, sexy P, etc should aesthetically all be infinite in magnitude because (respectively) they regularly and universally arise as part of Dimension YYYY and Dimension YY appearances. An isolated P is defined as a P such that neither P - 2 nor P + 2 is P. In other words, an isolated P is not part of a twin P pair. E.g., 23 is an isolated P since 21 & 25 are both C. Repeated inevitable presence of Dimension YY grouping is nothing more than an indication of repeated occurrences of isolated P. This constitutes another view on Dimension YY.
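Isolated P as defined here can be listed directly; a minimal sketch (helper names ours; note that under the letter of this definition the even P '2' also qualifies vacuously, since neither 0 nor 4 is P):

```python
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def isolated_primes(limit):
    """P below limit such that neither P - 2 nor P + 2 is P."""
    return [p for p in range(2, limit)
            if is_prime(p) and not is_prime(p - 2) and not is_prime(p + 2)]
```

For limit 30 this confirms 23 is isolated (21 & 25 are both C) while, e.g., 29 is not (it pairs with 31 as twin P).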
CIS of Gap 1 Composite Numbers are fully associated with non-twin P as they eternally occur in between any two consecutive non-twin P. CIS of Gap 2 Composite Numbers are (i) fully associated with twin P as they are eternally present in between any twin P pair, and (ii) partially associated with non-twin P as they are eternally present alternatingly or intermittently in between any two consecutive non-twin P. An inevitable statement in relation to "Gap 2 Composite Numbers pool contribution" based on the above reasoning: At the bare minimum, either twin P or at least one of the non-twin P types must be infinite in magnitude. An inevitable impression: All generated subsets of P from 'small gaps' [of 2 & 4] and 'large gaps' [of ≥ 6] alike should each be CIS thus allowing true uniformity in P distribution. Again we see in Table 5 depicting P-C data for x = 2 to 64 that, for instance, P with prime gap = 6 must also persistently have this 'last-place' Gap 2 Composite Numbers intermittently appearing in certain rhythmic alternating patterns, thus complying with Plus-Minus Gap 2 Composite Number Alternating Law. This CFS Dimensions representation for P with prime gap = 6 will again generate their infinite share of associated Gap 2 Composite Numbers to contribute to this pool. The presence of this last-place Gap 2 Composite Numbers in various alternating patterns of appearances & non-appearances must self-generatingly be similarly extended in a mathematically consistent fashion ad infinitum to all other remaining infinite number of prime gaps [which are not discussed in detail above]. The proof is now complete for Part II of Proposition 9.2.□

Rigorous Proofs for the Now-Named Polignac's and Twin Prime Hypotheses
The proofs on lemmas and propositions from the previous section supply all necessary evidence to fully support Theorems Polignac-Twin prime I to IV below, thus depicting proofs for Polignac's and Twin prime conjectures in a rigorous manner. Theorem Polignac-Twin prime I. Incompletely Predictable prime numbers P n = 2, 3, 5, 7, 11, ..., ∞ or composite numbers C n = 4, 6, 8, 9, 10, ..., ∞ are CIS with the overall actual location [but not actual positions] of all prime or composite numbers accurately represented by complex algorithms involving prime gaps G Pn viz. P n+1 = P n + G Pn or involving composite gaps G C n viz. C n+1 = C n + G C n whereby prime & composite numbers are symbolically represented here with aid of 'n' notation with n = 1, 2, 3, 4, 5, ..., ∞. P 1 = 2 in the first algorithm represents the very first (and only even) P. C 1 = 4 in the second algorithm represents the very first (and even) C.
Proof. We treat above algorithms as unique mathematical objects looking for key intrinsic properties and behaviors.
Each P or C is assigned a unique prime or composite gap. The absolute number of P or C and (thus) of prime or composite gaps is infinite in magnitude. As original formulae containing all P or C by themselves (viz. without supplying prime or composite gaps as "input information" to generate P or C as "output complexity"), these algorithms intrinsically incorporate the overall actual location [but not actual positions] of all P or C. The proof is now complete for Theorem Polignac-Twin prime I.□
Proof. Part I of Proposition 9.2 proved all P are represented by Dimension (2x - N) 1 with N ≥ 7 for any given x value (except for x = 2 & 3 values). Although x = 1 is neither P nor C, it is validly represented by Dimension (2x - 2) 1 . If each P is endowed with a specific prime gap value, then each such prime gap must [via logical mathematical deduction] be represented by Dimension (2x - N) 1 . The complete argument to support this nominated method of prime gap representation using Dimensions, fully complying with Information-Complexity conservation, was given in Part II of Proposition 9.2. The preceding mathematical statements are correct as there is a unique prime gap value associated with each P. Proposition 10.1 below, based on principles from Set theory, provides further supporting materials that prime gaps are infinite in magnitude. The proof is now complete for Theorem Polignac-Twin prime II.□
Proof. This Theorem is stated in greater detail as: "To maintain DA homogeneity, those aforementioned [endowed with exponent 1] Dimensions (2x - N) 1 from Theorem Polignac-Twin prime II must repeat themselves indefinitely in the following specific combinations -(i) Dimension (2x - 7) 1 only appearing as twin [two-times-in-a-row] and quadruplet [four-times-in-a-row] sequences, and (ii) Dimensions (2x - 8) 1 , (2x - 9) 1 , (2x - 10) 1 , (2x - 11) 1 , ..., (2x - ∞) 1 appearing as progressive groupings of E 2, 4, 6, 8, 10, ..., ∞." To accommodate the only even P '2', exceptions to this DA homogeneity compliance will expectedly occur right at the beginning of the P sequence -(i) one-off appearance of Dimensions (2x - 2) 1 , (2x - 4) 1 and (2x - 5) 1 and (ii) one-off appearance of Dimension (2x - 7) 1 as a quintuplet [five-times-in-a-row] sequence which is equivalent to (eternal) non-appearance of Dimension (2x - 6) 1 at x = 4. [We again note Dimension (2x - 2) 1 validly represents Number '1' which is neither P nor C.] These sequentially arranged sets are CFS whereby from x = 11 onwards, each set always commences initially as 'baseline' Dimension (2x - 7) 1 at x = O values and always ends with its last Dimension at x = E values. Each set also has varying cardinality with values derived from all E; and correctly combined sets always intrinsically generate the two infinite sets of P and, by default, C in an integrated manner. Our Theorem Polignac-Twin prime III simply represents a mathematical summary derived from Sections 8 & 9 of all expressed characteristics of Dimension (2x - N) 1 when used to represent P with intrinsic display of DA homogeneity. See Proposition 10.2 for more details on the DA aspect. The proof is now complete for Theorem Polignac-Twin prime III.□
Theorem Polignac-Twin prime IV. Aspect 1. The "quantitative" aspect to existence of both prime gaps and their associated prime numbers as sets of infinite magnitude will be shown to be correct by utilizing principles from Set theory. Aspect 2.
The "qualitative" aspect to existence of both prime gaps and their associated prime numbers as sets of infinite magnitude will be shown to be correct as follows. We analyze P (& C) in terms of (i) measurements based on cardinality of CIS and (ii) the pigeonhole principle which states that if n items are put into m containers, with n > m, then at least one container must contain more than one item. We note that the ordinality of all infinite P (& C) is "fixed", implying that each one of the infinite well-ordered Dimension sets conforming to CFS type as constituted by Dimensions (2x - 7) 1 , (2x - 8) 1 , (2x - 9) 1 , (2x - 10) 1 , (2x - 11) 1 , ..., (2x - ∞) 1 on respective gaps for P (& C) must also be "fixed".
Proposition 10.1. "Even number prime gaps are infinite in magnitude with each even number prime gap generating odd prime numbers which are again infinite in magnitude" is supported by principles from Set theory and two Laws based on Gap 2 Composite Number.
Proof. We validly exclude the even P '2' here. Let (i) cardinality T = ℵ 0 for Set all odd P derived from E prime gaps 2, 4, 6, ..., ∞, and (ii) cardinality T 2 = ℵ 0 for Subset odd P derived from E prime gap 2, cardinality T 4 = ℵ 0 for Subset odd P derived from E prime gap 4, cardinality T 6 = ℵ 0 for Subset odd P derived from E prime gap 6, etc. Paradoxically, (as sets) the T = T 2 + T 4 + T 6 + ... + T ∞ equation is valid despite (their cardinality) T = T 2 = T 4 = T 6 = ... = T ∞ [with the well-ordering principle "stating that every non-empty set of positive integers contains a least element" fulfilled by each (sub)set]; and E prime gaps being 'infinite in magnitude' can justifiably be perceived instead as 'arbitrarily large in magnitude' since the cumulative sum total of E prime gaps is relatively much slower to attain the 'infinite in magnitude' status when compared to the cumulative sum total of P which rapidly attains this status. But if Subset(s) of odd P derived from one or more E prime gap(s) were finite in magnitude, this would breach the ℵ 0 cardinality 'uniformity' resulting in (i) DA non-homogeneity and (ii) the inequality (as sets) T > T 2 + T 4 + T 6 + ... + T ∞ . In the language of the pigeonhole principle "stating that if n items are put into m containers with n > m, then at least one container must contain more than one item", residual odd P (still CIS in magnitude) not accounted for by CFS-type E prime gap(s) would have to be [incorrectly] contained in one (or more) of the composite gap(s). These arguments using cardinality constitute proof that E prime gaps & odd P generated from each E prime gap are all CIS. The proof [on "quantitative" aspect] is now complete for Proposition 10.1.□
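The claim that each even prime gap class keeps generating odd P can be observed empirically; a small tally (the bound 100,000 is our arbitrary choice, and finite data can of course only suggest, not prove, the CIS property):

```python
from collections import Counter

def primes_upto(x):
    """Plain Sieve of Eratosthenes returning all P <= x."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [n for n in range(2, x + 1) if sieve[n]]

odd_primes = primes_upto(100_000)[1:]   # exclude the even P '2'
# Tally T_2, T_4, T_6, ...: how many odd P arise from each even prime gap.
gap_tally = Counter(q - p for p, q in zip(odd_primes, odd_primes[1:]))
```

Every tallied gap is even, and each of the 'small gaps' 2 & 4 and the first 'large gap' 6 already contributes over a thousand odd P below this modest bound.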
The complete set of P is represented by Dimensions (2x - N) 1 . Table 5 & Figure 23 on the P-C finite scale mathematical landscape depict perpetual repeating features used in "qualitative" statements supporting (i) Plus-Minus Gap 2 Composite Number Alternating Law (stated as: C with composite gaps = 2 present in each of the P with prime gaps ≥ 4 situations must be observed to appear as some sort of rhythmic patterns of alternating presence and absence of this type of C), and (ii) Plus Gap 2 Composite Number Continuous Law (stated as: C with composite gaps = 2 appear continually in each of the (twin) P with prime gap = 2 situations). Plus-Minus Gap 2 Composite Number Alternating Law has a built-in intrinsic mechanism to automatically generate all prime gaps ≥ 4 in a mathematically consistent ad infinitum manner. Plus Gap 2 Composite Number Continuous Law has a built-in intrinsic mechanism to automatically generate prime gap = 2 appearances in a mathematically consistent ad infinitum manner. These two Laws refer to end-products obtained from "the second key step of using our unique Dimension (2x - N) system instead of Sieve of Eratosthenes". The proof [on "qualitative" aspect] is now complete for Proposition 10.1.□
Proposition 10.2. The presence of Dimensional analysis homogeneity always results in the correct and complete set of prime (and composite) numbers.
Each [fixed] finite scale mathematical landscape "page" as part of [fixed] infinite scale mathematical landscape "pages" for P & C display Chaos [sensitivity to initial conditions viz. positions of subsequent P & C are "sensitive" to positions of initial P & C] and Fractals [manifesting fractal dimensions with self-similarity viz. those aforementioned Dimensions for P & C are always present, albeit in non-identical manner, for all ranges of x ≥ 2]. Advocated in another manner, Chaos and Fractals phenomena of those Dimensions for P & C are always present signifying accurate composition of P & C in different [predetermined] finite scale mathematical landscape "(snapshot) pages" for P & C that are self-similar but never identical -and there are an infinite number of these finite scale mathematical landscape "(snapshot) pages". The crucial mathematical step in representing all P (& C) and prime (& composite) gaps with "Dimensions" based on Information-Complexity conservation allows us to obtain the two Laws based on Gap 2 Composite Numbers and perform DA on these entities. The 'strong' principle argument is DA homogeneity equates to complete set of P (& C) whereas DA non-homogeneity does not equate to complete set of P (& C). We also advocate for a 'weak' principle argument supporting DA homogeneity for P (& C) in that nature should not "favor" any particular Dimension(s) to terminate and therefore DA non-homogeneity cannot exist for P (& C). Abiding to an advocated convention that 'conjecture' be termed 'hypothesis' once proven; we now label these conjectures as Polignac's and Twin prime hypotheses.

Conclusions
This original expository paper is advocated to be a novel achievement as we manage to simultaneously model COVID-19 from Medicine as well as solve [unconnected] intractable open problems from Number theory using our versatile Fic-Fac Ratio. In other words, we successfully relate open problems from Number theory when considered as a frontier branch of Mathematics to COVID-19 from Medicine when considered as other science, technology and biology. Transmitted between animals and people, zoonotic virus SARS-CoV-2 which originated from Wuhan, China causing COVID-19 has been clearly shown not to be a laboratory construct or a purposefully manipulated virus (Andersen et al, 2020). Some overall goals of publishing this paper are to promote Mathematics as the 'Universal Language of Science', and foster global cooperation between all nations on planet Earth to effectively combat and better understand the deadly 2020 Coronavirus pandemic. Note: The contextual use of supramaximal elevation or fall of cytokines is based on the phenomenon and proposed homeostatic mechanism of supramaximal elevation in B-type natriuretic peptide and its N-terminal fragment levels in anephric patients with heart failure [previously introduced by us in 2012 (Ting & Pussell, 2012)]. This mechanism consists of analyzing the permutations with repetition formula n^r = n^2 from combinatorics involving 'n' individual factors that tend to have non-linear elevating or lowering properties viz. 'r' = 2. Antibody-directed therapy such as convalescent plasma, hyperimmune globulin and monoclonal antibodies may also play an important role in more rapid control and clearance of SARS-CoV-2.
From our August 12, 2020 14-page paper entitled "Showing role of Angiotensin-converting enzyme 2 in COVID-19 using novel Fic-Fac Ratio" (J. Y. C. Ting) located at URL https://vixra.org/abs/2008.0082 Science Category: Physics of Biology, we also provide a Case Report for medically-oriented readers of a 43 year-old man with acute respiratory distress syndrome (ARDS) on August 28, 2003 from viral pneumonia together with applications from Fic-Fac Ratio to creatively explain COVID-19's drug and vaccine developments, and mitigation measures to combat the resulting pandemic. This patient had initial severe Type 1 Respiratory Failure viz. decreased PaO2 < 60 mmHg (8.0 kPa) with normal or subnormal PaCO2 < 50 mmHg (6.7 kPa) which rapidly deteriorated to severe Type 2 Respiratory Failure viz. decreased PaO2 < 60 mmHg (8.0 kPa) and increased PaCO2 > 50 mmHg (6.7 kPa) requiring intubation and ventilation.
We mathematically envisage two mutually exclusive groups of entities: [totally] Unpredictable entities and [totally] Predictable entities. The first group dubbed Type I entities or Completely Unpredictable entities can arise as [totally] random physical processes in nature e.g. radioactive decay is a stochastic (random) process occurring at the level of single atoms. According to Quantum theory, it is impossible to predict when a particular atom will decay regardless of how long the atom has existed. For a collection of atoms, expected decay rate is characterized in terms of their measured decay constants or half-lives. The second group is constituted by two subgroups: dubbed Type II entities or Completely Predictable entities e.g. Even-Odd number pairing in Table 6 (with abbreviation 'Y' = Dimension 2x - 4) and dubbed Type III entities or Incompletely Predictable entities e.g. Prime-Composite number pairing in Table 5. (2) Prime & composite numbers are [dependently] derived from "Numerical relationship interface" using Sieve of Eratosthenes. Using prime gaps as analogy, there are (for instance) "nontrivial zeros gaps" between any two nontrivial zeros with all these gaps of infinite magnitude being Incompletely Predictable entities. Prime number theorem describes the asymptotic distribution of prime numbers among positive integers by formalizing the intuitive idea that prime numbers become less common as they become larger through precisely quantifying the rate at which this occurs using probability. A secondary spin-off arising out of solving Riemann hypothesis results in absolute and full delineation of prime number theorem. This theorem relates to the prime counting function which is usually denoted by π(x) with π(x) = number of prime numbers ≤ x. In other words, solving Riemann hypothesis is instrumental in proving efficacy of techniques that estimate π(x) efficiently. This confirms the "best possible" bound for error ("smallest possible" error) of prime number theorem.
In mathematics, the logarithmic integral function or integral logarithm li(x) is a special function. Relevant to problems of physics with number theoretic significance, it occurs in the prime number theorem as an estimate of π(x) whereby its offset form Li(x) is defined so that Li(2) = 0; viz. Li(x) = ∫_2^x du/ln u = li(x) - li(2). There are less accurate ways of estimating π(x) such as x/ln x conjectured by Gauss and Legendre at the end of the 18th century, in the sense lim_{x→∞} π(x)/(x/ln x) = 1. Skewes' number is any of several extremely large numbers used by South African mathematician Stanley Skewes as upper bounds for the smallest natural number x for which li(x) < π(x). These bounds have since been improved by others: there is a crossing near e^727.95133 but it is not known whether this is the smallest. John Edensor Littlewood, who was Skewes' research supervisor, proved in 1914 (Littlewood, 1914) that there is such a [first] number; and found that the sign of the difference π(x) - li(x) changes infinitely often. This refutes all prior numerical evidence that seemed to suggest li(x) was always > π(x). The key point is that the [100% accurate] perfect π(x) mathematical tool being "wrapped around" by the [less-than-100% accurate] approximate li(x) mathematical tool infinitely often via these 'sign of difference' changes means that li(x) is the most efficient approximate mathematical tool. Contrast this with the "crude" x/ln x approximate mathematical tool whose values diverge away from π(x) at an increasingly greater rate for larger ranges of prime numbers.
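The relative quality of the two estimates can be checked numerically; a rough sketch (trapezoidal integration for Li(x) and a plain sieve for π(x); helper names ours):

```python
import math

def Li(x, steps=100_000):
    """Offset logarithmic integral Li(x) = integral from 2 to x of du/ln u (trapezoidal rule)."""
    h = (x - 2) / steps
    total = 0.5 * (1 / math.log(2) + 1 / math.log(x))
    for i in range(1, steps):
        total += 1 / math.log(2 + i * h)
    return total * h

def prime_pi(x):
    """Exact prime counting function via Sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

x = 100_000
pi_x = prime_pi(x)              # exact count: 9592
est_li = Li(x)                  # the efficient estimate
est_crude = x / math.log(x)     # the "crude" Gauss-Legendre estimate
```

At x = 100,000 the Li(x) estimate lands within a few dozen of the exact count 9592, while x/ln x misses by roughly nine hundred, illustrating why li-type estimates are the more efficient tool.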
Using the classification system in Appendix C, a formula is either a non-Hybrid or a Hybrid integer sequence. An inequation with two 'necessary' Ratios (R) or an equation with one 'unnecessary' R contains a non-Hybrid integer sequence. An equation with one 'necessary' R contains a Hybrid integer sequence. "In the limit" a Hybrid integer sequence approaches unique Position X, it becomes a non-Hybrid integer sequence for all Positions ≥ Position X. Kinetic energy (KE) has its endowed units in J when m 0 = rest mass in kg and v = velocity in ms −1 . In classical mechanics concerning low velocity with v << c, Newtonian KE = (1/2)m 0 v^2; in special relativity, Relativistic KE = (γ - 1)m 0 c^2 with Lorentz factor γ = 1/√(1 - v^2/c^2). Obtained from the latter by binomial approximation or by taking the first two terms of the Taylor expansion for the reciprocal square root, the former approximates the latter well at low speed. We arbitrarily denote inexact DA homogeneity for '<100% accuracy' Newtonian KE and exact DA homogeneity for '100% accuracy' Relativistic KE. "In the limit" Newtonian KE at low speed approaches Relativistic KE at high speed, we achieve perfection.
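The low-speed agreement and high-speed divergence of the two KE formulae can be demonstrated directly; a minimal sketch (variable names ours):

```python
import math

C = 299_792_458.0   # speed of light in m/s

def ke_newtonian(m0, v):
    """Classical KE = (1/2) m0 v^2."""
    return 0.5 * m0 * v * v

def ke_relativistic(m0, v):
    """Relativistic KE = (gamma - 1) m0 c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m0 * C * C

m0 = 1.0                        # 1 kg test mass
slow, fast = 3_000.0, 0.9 * C   # v << c versus v comparable to c
```

At 3 km/s the two formulae agree to better than one part in ten thousand; at 0.9c the Newtonian value falls to less than half the relativistic one.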
Useful analogy: "In the limit" all three versions of Dirichlet Sigma-Power Laws for Gram[y=0] points, Gram[x=0] points and nontrivial zeros as '<100% accuracy' inequations approach perfection as '100% accuracy' equations, compliance with inexact DA homogeneity becomes compliance with exact DA homogeneity. Note: Absence of fractional exponent (σ+1) as relevant 'unit of measurement' in R1 terms of all inequations gives rise to the so-called Pseudo-(all fractional exponents). Fully understanding the validity of this entity has greatly contributed to designing the extremely useful Fic-Fac Ratio which we regard as a tertiary spin-off from solving our open problems in Number theory. Treated as Incompletely Predictable problems, we gave a relatively elementary proof of Riemann hypothesis and explained the two types of Gram points by analyzing the "meta-properties" of relevant Dirichlet Sigma-Power Laws. We define two terms: perfect symmetry to denote "even functions" [which are symmetric about the vertical y-axis] and "odd functions" [which are symmetric about the origin]; and broken symmetry to denote "neither even nor odd functions" [which are neither symmetric about the vertical y-axis nor the origin]. Relevant types of Gram points (at σ = 1/2) and virtual Gram points (at σ ≠ 1/2) represent their corresponding x-axis, y-axis and origin intercepts with two true statements: (1) Dirichlet Sigma-Power Laws pertaining to Gram[x=0,y=0] points (nontrivial zeros) in Riemann hypothesis and virtual Gram[x=0,y=0] points will manifest broken symmetry viz. not satisfying particular symmetry relations present in "even functions" or "odd functions" to combinedly be classified as "neither even nor odd functions" for all their equations and inequations.
The algorithm to compute Z(t) is called the Riemann-Siegel formula. Riemann zeta function on the critical line, ζ(1/2 + it), will be real when sin(θ(t)) = 0. Positive real values of t where this occurs are called 'Gram points' and can also be described as points where θ(t)/π is an integer. The real part of this function on the critical line tends to be positive, while the imaginary part alternates more regularly between positive & negative values. That means the sign of Z(t) must be opposite to that of the sine function most of the time, so one would expect nontrivial zeros of Z(t) to alternate with zeros of the sine term, i.e. when θ takes on integer multiples of π. This turns out to hold most of the time and is known as Gram's Rule (Law) -a law which is violated infinitely often though. Thus Gram's Law is the statement [on the manifested property] that nontrivial zeros of Z(t) alternate with 'Gram points'. 'Gram points' which satisfy Gram's Law are called 'good', while those that do not are called 'bad'. A Gram block is an interval such that its first & last points are good 'Gram points' and all 'Gram points' inside this interval are bad. Counting nontrivial zeros then reduces to counting all 'Gram points' where Gram's Law is satisfied and adding the count of nontrivial zeros inside each Gram block. With this process we need not locate nontrivial zeros but just have to accurately compute Z(t) to show that it changes sign.
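'Gram points' as solutions of θ(t) = nπ can be located numerically; the sketch below uses only the first terms of the standard asymptotic expansion θ(t) ≈ (t/2)ln(t/(2π)) - t/2 - π/8 + 1/(48t) together with bisection (function names ours), reproducing the well-known values g 0 ≈ 17.8456 and g 1 ≈ 23.1703:

```python
import math

def theta(t):
    """Riemann-Siegel theta function: leading asymptotic terms, adequate for t >= 10 or so."""
    return t / 2 * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 + 1 / (48 * t)

def gram_point(n, lo=9.0, hi=1.0e6):
    """Solve theta(t) = n*pi by bisection (theta is strictly increasing on this range)."""
    target = n * math.pi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if theta(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

This locates Gram points only; deciding whether each is 'good' or 'bad' would additionally require accurate evaluation of Z(t) via the Riemann-Siegel formula, which is beyond this sketch.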

B. Ratio Study and Inequations
A mathematical equation, containing one or more variables, is a statement that the values of two ['left-hand side' (LHS) and 'right-hand side' (RHS)] mathematical expressions are related as equality: LHS = RHS; or as inequalities: LHS < RHS, LHS > RHS, LHS ≤ RHS, or LHS ≥ RHS. A ratio is one mathematical expression divided by another. The term 'unnecessary' Ratio (R) for any given equation is explained by two examples: (1) LHS = RHS and with rearrangement, 'unnecessary' R is given by LHS/RHS = 1 or RHS/LHS = 1; and (2) LHS > RHS and with rearrangement, 'unnecessary' R is given by LHS/RHS > 1 or RHS/LHS < 1. Consider exponent y ∈ all R values and base x ∈ R≥0 values for the mathematical expression x^y. Equations such as x^1 = x, x^0 = 1 and 0^y = 0 are all valid. Simultaneously letting both x and y = 0 is an incorrect mathematical action because x^y as a function of two variables is not continuous and is undefined at the Origin. If we elect to carry out this "balanced" action [equally] on x and y, we obtain the (simple) inequation 0^0 ≠ 1 with associated perpetual obeyance of the '=' equality symbol in x^y for all applicable R values except when both x and y = 0. The Number '1' value in this inequation is justified by two arguments: I. The limit of the x^y value as both x and y tend to zero (from the right) is 1 [thus fully satisfying the criterion "x^y is right continuous at the Origin"]; and II. The expression x^y is the product of x with itself y times [and thus x^0, the "empty product", should be 1 (no matter what value is given to x)].
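The discontinuity of x^y at the Origin, and the right-limit value 1 along the diagonal x = y, can be observed numerically; a minimal sketch (helper name ours):

```python
# Approach the Origin along the diagonal x = y: the values x**x tend to 1,
# even though 0.0**y == 0.0 for y > 0 while x**0 == 1 for x > 0 -- hence
# x**y is discontinuous at the Origin and "0^0 = 1" holds only as a right limit.
def diagonal_values(steps):
    x, vals = 1.0, []
    for _ in range(steps):
        x /= 10.0
        vals.append(x ** x)
    return vals

vals = diagonal_values(8)
```

The successive values climb monotonically towards 1 while the two one-sided axis values (0 and 1) stay apart, matching argument I above.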
Mathematical operator 'summation' obeys the law: We can break up a summation across a sum or difference but not across a product or quotient viz. factoring a sum of quotients into a corresponding quotient of sums is an incorrect mathematical action. But if we elect to carry out this action equally on LHS and RHS products or quotients in a suitable equation, we obtain two (unique) 'necessary' R denoted by R1 for LHS and R2 for RHS whereby the R1 ≠ R2 relationship always holds. We define 'Ratio Study' as intentionally performing this incorrect [but "balanced"] mathematical action on a suitable equation [equivalent to one (non-unique) 'unnecessary' R] to obtain its inequation [equivalent to two (unique) 'necessary' R]. We note that performing Ratio Study to obtain inequations involving C does not involve defining a relation between two C. Given Set C is a field (but not an ordered field), it is also not possible to define a relation between two given (z 1 and z 2 ) C as z 1 < z 2 since the inequality operation is not compatible with addition and multiplication.
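That factoring a sum of quotients into a quotient of sums is invalid, i.e. that the resulting two 'necessary' Ratios genuinely differ, is shown by any small numeric example (the particular numbers are our arbitrary choice):

```python
# Ratio Study in miniature: "breaking up" a summation across a quotient
# turns one 'unnecessary' ratio into two genuinely different 'necessary'
# ratios, R1 (sum of quotients) and R2 (quotient of sums), with R1 != R2.
a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]
R1 = sum(x / y for x, y in zip(a, b))   # 1/4 + 2/5 + 3/6 = 1.15
R2 = sum(a) / sum(b)                    # (1+2+3)/(4+5+6) = 0.4
```

Here R1 = 1.15 while R2 = 0.4, so the "balanced" but incorrect action converts the original equality into an inequation.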

C. Hybrid method of Integer Sequence classification
Let a k (n) denote an arbitrary list of integer sequences whereby k = 1, 2, 3, ... and all integer sequences are of infinite length. Consider two integer sequences a 1 (n) and a 2 (n) which are (1) specifically given by their respective type of inequality (or equality) "mathematical operators"; and (2) based on one nominated type of "mathematical function". Integers from a 1 (n) and a 2 (n) are identical to each other except for the interspersed finite number of 'exceptional' terms located in either a 1 (n) or a 2 (n). In other words, this special phenomenon allows definition of a subsequence with finitely many altered elements known as 'exceptional' terms. The integer sequence having these 'exceptional' terms is the Hybrid integer sequence, and the other is its [corresponding] non-Hybrid integer sequence. These two unique integer sequences are then grouped together. Thus, this novel classification enables meaningful pairing of two unique integer sequences. Involving the factorial (!) function, this is exampled by our exotic A228186 Hybrid integer sequence (Ting, 2013) with its corresponding A100967 non-Hybrid integer sequence (Noe, 2004). It is currently unclear whether (1) there is more than one existing pair of these extraordinary integer sequences based on the ! function, and (2) whether they could involve other mathematical functions apart from the ! function. With the challenge to discover more, A228186 is the first ever known Hybrid integer sequence which can uniquely and alternatively be synthesized from a "Combinatorics Ratio". For our 'Position i' notation, we let 'i' belong to the complete set of natural numbers. We conventionally assign 'n' to denote 'Position i' viz., n = 0, 1, 2, 3, 4, 5, ..., ∞. We now succinctly explain below the complete and correct mathematical arguments that rigorously substantiate A228186 and A100967 when grouped together as belonging to "Hybrid method of Integer Sequence classification".
Precisely defined as "Smallest natural number k > n such that (k+n+1)!(k-n-2)! < 2k!(k-1)!" or alternatively defined as "Greatest natural number k > n such that calculated peak values for ratio R = CombinationsWithRepetition/CombinationsWithoutRepetition = (k + n − 1)!(n − k)!/(n!(n − 1)!) belong to maximal rational numbers < 2"; A228186 is equal to the [infinite length] non-Hybrid (usual) integer sequence A100967 apart from its interspersed finite number of 'exceptional' terms.
Figure 26. The Sierpinski gasket, also called Sierpinski triangle, is a fractal attractive fixed set with the overall shape of an equilateral triangle subdivided recursively into smaller equilateral triangles. It displays exact self-similarity (i.e., the whole has the same shape as one or more of the parts).
Useful overview of deterministic [not stochastic] processes: 'Chaos' is [mathematically] synonymous with chaotic nonlinear dynamical systems which are "complex systems" described by discrete or continuous nonlinear (deterministic) equations or algorithms and manifesting the key feature of sensitivity to initial conditions. 'Fractal' is [geometrically] synonymous with fractional geometry which deals with "geometrical objects" (graphs) having fractional (fractal) dimensions and manifesting the key feature of self-similarity. Each unique geometrical object when deterministically computed from a given Chaos is precisely its Fractal. The mentioned "complex systems" in this paper contain well-defined Incompletely Predictable entities such as nontrivial zeros and two types of Gram points specified by Riemann zeta function (or its proxy Dirichlet eta function) together with prime and composite numbers specified by Sieve of Eratosthenes. We observe complete presence of Chaos and Fractals phenomena manifested in Figures 10 to 23 that involve the relevant Incompletely Predictable entities whereby these Figures all manifest self-similarity or, more precisely, Quasi-self-similarity.
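The 'Combinatorics Ratio' of combinations-with-repetition to combinations-without-repetition can be computed directly; a minimal illustrative sketch only (the helper name is ours, and this is not claimed to reproduce the exact OEIS definitions of A228186/A100967):

```python
from math import comb

def repetition_ratio(n, k):
    """R = C(n+k-1, k) / C(n, k): combinations with repetition over without (for k <= n)."""
    return comb(n + k - 1, k) / comb(n, k)
```

For each fixed n the ratio equals 1 at k = 1 and strictly grows with k (successive ratios multiply by (n+k)/(n-k) > 1), so there is a greatest k at which it stays below 2, in the spirit of the 'maximal rational numbers < 2' phrasing above.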
By the same token, we will observe complete absence of Chaos and Fractals phenomena in Figure 24 that involves the Completely Predictable entities of even and odd numbers. Counterintuitively, one could technically consider in a strict geometrical sense that the graphed "straight line" [which is clearly identical at different scales] in Figure 24 containing Completely Predictable entities of even and odd numbers does manifest Chaos and Fractals phenomena that display Exact self-similarity.
The inspiring idiom "Complexity arising from Life at the Edge of Chaos-Fractal" led us to provide a Hierarchical Classification for Elementary-Emergent Fundamental Laws (EEFL). As implied by the definition of 'Fundamental Laws', EEFL must by default be perfectly applicable to Terrestrial human beings on planet Earth (endowed with advanced civilization) and also Extraterrestrial alien beings on some hypothetical remote planet (endowed with super-advanced civilization). Thus one could also appropriately coin our Fundamental Laws as the Extraterrestrial-Terrestrial EEFL.
In order of increasing complexity, we have the following Laws: