Property:Description

From Encyclopedia of Scientonomy

This is a property of type Text.

The values of this property are to be stored by converting all instances of CiteRef:: to CITE_. E.g.:

{{#set: Description={{#replace:{{{Description|}}}|CiteRef::|CITE_}}|}}

Make sure to make the opposite replacement when retrieving the value. E.g.:

{{#replace: {{#show: {{FULLPAGENAME}}|?Description}}|CITE_|CiteRef::}}

This is done to enable transclusion of descriptions on other pages. When the descriptions were stored as is, a simple {{#show: {{FULLPAGENAME}}|?Description}} would fail to properly display the semantic citations. Instead of a superscript [1] with bibliographic info, it would show something like (Barseghyan (2015)).
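For illustration, the same pair of substitutions can be sketched outside the wiki. The following Python snippet mirrors the two #replace calls above; the function names are illustrative only, since on the wiki the conversion is done entirely in wikitext:

```python
# Python sketch of the store/retrieve substitutions performed by the
# #replace parser function calls shown above. Function names are
# illustrative; the wiki itself performs these steps in wikitext.

def store_description(description: str) -> str:
    # Escape semantic citation markup before the value is stored.
    return description.replace("CiteRef::", "CITE_")

def retrieve_description(stored: str) -> str:
    # Make the opposite replacement when the value is retrieved.
    return stored.replace("CITE_", "CiteRef::")

original = "As shown in [[CiteRef::Barseghyan (2015)|p. 111]], ..."
stored = store_description(original)
# The stored value now reads "[[CITE_Barseghyan (2015)|p. 111]]",
# which survives transclusion; retrieval restores the original markup.
assert retrieve_description(stored) == original
```

Note that the reverse replacement assumes the stored value contains no literal CITE_ prefixes other than those produced at storage time.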

Showing 328 pages using this property.
A
This definition aims to discern between accidental groups, i.e. the ones that don't have a collective intentionality, and actual communities, i.e. groups that do have collective intentionality.  +
By allowing the discussants to suggest alternative formulations in their comments, the workflow incentivizes commenting and aids consensus building. It often happens that the discussants agree that a little tweak in the original formulation would solve the problem. This can help speed up the advancement of our communal knowledge. In contrast, when discussants are not permitted to reformulate the original formulations, they have no choice but to write a whole new paper arguing for what is otherwise a little tweak to the original formulation. Not only is this wasteful, but it also creates a bottleneck where consensus formation is postponed due to bureaucratic restrictions. Thus, it is important to remove this bottleneck and allow the participants to alter original formulations. If a discussion yielded a new formulation, any such formulation should be clearly stated and added to the respective suggested modification, possibly under a separate heading (e.g. by distinguishing “Original Suggestion” from “Final Suggestion”). By default, the new formulation should bear the name of the author(s) of the original suggested modification, unless the original author(s) decides to give credit to those who significantly contributed to the reformulation. This should be decided collegially by the author, the commentators, and the editors on a case-by-case basis.  +
Some facts ''ought'' to be relevant to the [[Theory Acceptance|assessment of a theory]] because the content of the theory itself implies their relevance, and others ought to be relevant simply by definition. When assessing a theory concerning scientific change, relevant facts that ought necessarily to be considered include questions pertinent to scientific change processes. For example: What [[Theory|theories]] and [[Method|methods]] were part of the [[Scientific Mosaic|scientific mosaic]] of the community in question, both before and after the instance of [[Scientific Change|scientific change]]? What modifications were proposed and what parts of the mosaic did they intend to replace? Which of these modifications became accepted into the mosaic, and how? Relevant questions will depend on accepted views about the [[Scope of Scientonomy|scope of scientonomy]]. For example, if scientonomy deals with scientific change [[Scope of Scientonomy - Individual and Social|at the level of scientific communities]], then facts about the accepted views of communities ought to be relevant, and the views of particular individuals ought not. If scientonomy [[Scope of Scientonomy - Construction and Appraisal|deals only with theory appraisal]] and not with theory construction, then it follows that facts concerning the former, but not the latter, ought to be considered. Relevant facts will also depend on the content of the mosaic at the time in question. For example, it is anachronistic to speak of religious constraints on science in the seventeenth century since, at that time, religion and natural philosophy were not regarded as separate domains of knowledge, but as part of the same mosaic.[[CITE_Barseghyan (2015)|p. 111]]  +
The theorem states that the employment of a method is not necessarily simultaneous with the acceptance of a new theory. Being a direct logical consequence of [[The Third Law|the third law]], the theorem highlights the fact that some methods are a result of the implementation of some abstract requirements of other methods. In this way, a new method can be devised as a means of resolving a particular creative gap, and subsequently become employed long after the acceptance of the theory that generated the abstract requirements it implements.  +
Barseghyan presents a historical example showing that scientific change is not necessarily a ''synchronous'' process. <blockquote> When it comes to acquiring data about such minute objects as molecules or living cells, the unaided human eye is virtually useless. This proposition yields, among other things, an abstract requirement that, when counting the number of cells, the resulting value is acceptable only if it is obtained with an “aided” eye. This abstract requirement has been implemented in a variety of different ways. First, there is the counting chamber method where the cells are placed in a counting chamber – a microscope slide with a special sink – and the number of cells is counted manually under a microscope. There is also the plating method where the cells are distributed on a plate with a growth medium and each cell gives rise to a single colony. The number of cells is then deduced from the number of colonies. In addition, there is the flow cytometry method where the cells are hit by a laser beam one by one and the number of cells is counted by means of detecting the light reflected by the cells. Finally, there is the spectrophotometry method where the number of cells is obtained by means of measuring the turbidity in a spectrophotometer.[[CITE_Barseghyan (2015)|pp. 151-152]]</blockquote> These are four different implementations of the ''same'' abstract requirement, which were, importantly, all devised and employed at different times.  +
One key corollary of the third law is put forth in Barseghyan (2015). "Scientific change is not necessarily a ''synchronous process'': changes in theories are not necessarily simultaneous with changes in methods".[[CITE_Barseghyan (2015)|p. 150]] <blockquote>Suppose a new theory becomes accepted and some new abstract constraints become imposed. In this case, we can say that the acceptance of a theory resulted in the employment of a new method and the employment of a new method was synchronous with the acceptance of a new theory. But we also know that there is the second scenario of method employment, where a method implements some abstract requirements of other employed methods. In this scenario, there is a certain creative gap between abstract requirements that follow directly from accepted theories and methods that implement these abstract requirements. Devising a new method that would implement abstract requirements takes a fair amount of ingenuity and, therefore, there are no guarantees that these abstract requirements will be immediately followed by a new concrete method. In short, changes in methods are not necessarily simultaneous with changes in theories.[[CITE_Barseghyan (2015)|pp. 150-151]]</blockquote>  +
If we consider the fact that scientific research is so specialized that no single research lab can account for all accepted theories in their discipline, we quickly recognize that there exists some form of distribution of labour among subcommunities. Authority delegation is an attempt to capture that distribution of labour in scientonomic terms. What this definition of authority delegation jointly expresses is the acceptance of a theory and the associated employment of a method. In any instance of authority delegation, the delegating community accepts that the community delegated to is an expert in some field. It follows from accepting that expertise that the same delegating community will simply employ a method to accept whatever the expert community says to accept. Importantly, the method employed by the delegating community is distinct from that employed by the community delegated to; it would be misleading to suggest that the delegating community employs the same method as the community delegated to. This definition is careful to capture such particularities, as the definition merely expresses a new theory accepted and a method employed by the delegating community. For a simple example, consider a relation of authority delegation between physicists and biologists. A community of physicists can be said to be delegating authority over the life sciences to a community of biologists, so long as the community of physicists ''both'' accepts that biologists are experts in the life sciences ''and'' will accept a theory on the life sciences if told so by the biologists.  +
Authority delegation explained by Gregory Rupik  +
The definition tweaks the [[Authority Delegation (Overgaard-Loiselle-2016)|original definition]] of the term by [[Nicholas Overgaard|Overgaard]] and [[Mirka Loiselle|Loiselle]] to ensure that the relationship of authority delegation can obtain between [[Epistemic Agent|epistemic agents]] of all types. It also substitutes [[Question|''question'']] for ''topic'', as the former is the proper scientonomic term that should be used.  +
B
Hakob Barseghyan presenting the redrafted ontology  +
There is only one type of agent that can bear a mosaic: a community.[[CITE_Barseghyan (2015)|pp. 43-52]] As for ''individual'' epistemic agents, their status and role in the process of scientific change are unclear; thus, the notion of an individual bearing a mosaic is problematic.  +
C
One potential way of addressing the problem of closure mechanism is by introducing a “countdown” mechanism, where the community is given a three-month (90-day) discussion period for commenting on a suggested modification and, if no objections are raised during this period, the proposed modification becomes accepted by default. According to Shaw and Barseghyan: <blockquote>This allows for the possibility of inclusive debate without stalling on the development of our theory of scientific change. One disadvantage is that it doesn’t address the worry about masked objections raised in the previous section – people still may not explicitly dissent.[[CITE_Shaw and Barseghyan (2019)|p. 11]]</blockquote>  +
To ensure that a suggested modification is properly evaluated and a verdict is reached, the community should be given a certain time period to discuss the modification, after which a communal vote should be taken. This vote should be offered to all members of the community, who should be given a short timeframe to decide. In principle, this strategy should contribute to the transparency and inclusivity of the workflow by involving larger portions of the community in the workflow. Since voting doesn't require a great deal of effort, this approach also addresses the problem of lack of commenting. As stated by Shaw and Barseghyan: <blockquote>In a sense, this proposal would look like an election where there are two main phases. In the first phase, arguments will be made but no particular line of action will be taken. In the second phase, the vote will take place and a decision will be made by the will of the people. In addition, this strategy has the advantage of overcoming the problem of masked objections. People can vote anonymously, expressing their view and approval or dissatisfaction with a proposed modification, without fear of any sort of reprisal. One disadvantage is that a vote is not always grounded in good reasons. Community members may ignore important considerations and vote without being informed on the topic, thus leading to a less-than-ideal consensus. As we are witnessing in the world around us, the will of the people does not always pick out the best choice.[[CITE_Shaw and Barseghyan (2019)|p. 11]]</blockquote>  +
A [[Group|group]] that has a collective intentionality.  +
When dealing with a community, it might be useful to analyze it in terms of its constituent subcommunities (e.g. the community of particle physicists within the community of physicists). But such an analysis is based on an assumption that a community can consist of other communities, i.e. subcommunities. This assumption is by no means trivial; indeed, there might exist a view that each community is to be treated separately as one indivisible whole and, thus, any talk of its constituents is meaningless. According to Overgaard, communities can be said to consist of other communities.[[CITE_Overgaard (2017)|p. 58]] Thus, there is such a thing as a subcommunity, i.e. a community that is part of a larger community.  +
This definition of ''compatibility'' captures the main gist of the notion as it was originally intended by [[Rory Harder|Harder]] and [[Hakob Barseghyan|Barseghyan]] - the idea that two elements are compatible when they can coexist within the same mosaic.  +
The corollary is meant to restate the content of [[Rory Harder|Harder]]'s [[The Zeroth Law (Harder-2015)|the zeroth law]] of scientific change. Since the corollary follows deductively from the definition of [[Compatibility (Fraser-Sarwar-2018)|''compatibility'']], it highlights that the zeroth law as it was formulated by Harder is tautologous. Since the corollary covers the same idea as the zeroth law, all the theorems that were thought to be deducible by means of the zeroth law (e.g. [[Theory Rejection theorem (Barseghyan-2015)|the theory rejection theorem]] or [[Method Rejection theorem (Barseghyan-2015)|the method rejection theorem]]) can now be considered deducible by means of the corollary.  +
Like [[Demarcation Criteria|demarcation]] and [[Acceptance Criteria|acceptance criteria]], compatibility criteria can be part of a community's employed method. The community employs these criteria to determine whether two theories are mutually compatible or incompatible, i.e. whether they can be simultaneously part of the community's mosaic. Different communities can have different compatibility criteria. While some communities may opt to employ the logical law of noncontradiction as their criterion of compatibility, other communities may be more tolerant towards logical inconsistencies. According to Barseghyan, the fact that these days scientists "often simultaneously accept theories which strictly speaking logically contradict each other is a good indication that the actual criteria of compatibility employed by the scientific community might be quite different from the classical logical law of noncontradiction".[[CITE_Barseghyan (2015)|p. 11]] For example, this is apparent in the case of general relativity vs. quantum physics where both theories are accepted as the best available descriptions of their respective domains (i.e. they are considered ''compatible''), but are known to be in conflict when applied simultaneously to such objects as black holes.  +
Barseghyan presents the following hypothetical-historical example when introducing compatibility criteria in [[Barseghyan (2015)]]. <blockquote>It can be argued that our contemporary criteria of compatibility have not always been employed. Consider the case of the reconciliation of the Aristotelian natural philosophy and metaphysics with Catholic theology. As soon as most works of Aristotle and its Muslim commentators were translated into Latin (circa 1200), it became obvious that some propositions of Aristotle’s original system were inconsistent with several dogmas of the then-accepted Catholic theology. Take, for instance, the Aristotelian conceptions of determinism, the eternity of the cosmos, and the mortality of the individual soul. Evidently, these conceptions were in direct conflict with the accepted Catholic doctrines of God’s omnipotence and free will, of creation, and of the immortality of the individual human soul.[[CITE_Lindberg (2007)|pp. 228-253]] Moreover, some of the passages of Scripture, when taken literally, appeared to be in conflict with the propositions of the Aristotelian natural philosophy. In particular, Scripture seemed to imply that the Earth is flat (e.g. Daniel 4:10-11; Matthew 4:8; Revelation 7:1), which was in conflict with the Aristotelian view that the Earth is spherical. It is no surprise, therefore, that many of the propositions of the Aristotelian natural philosophy were condemned on several occasions during the 13th century.[[CITE_Lindberg (2007)|pp. 226-249]] To resolve the conflict, Albert the Great, Thomas Aquinas and others modified both the Aristotelian natural philosophy and the biblical descriptions of natural phenomena to make them consistent with each other. On the one hand, they stipulated that the laws of the Aristotelian natural philosophy describe the natural course of events only insofar as they do not limit God’s omnipotence, for God can violate any laws if he so desires. 
Similarly, they modified Aristotle’s determinism by adding that the future of the cosmos is determined by its present only insofar as it is not affected by free will or divine miracles. Similar modifications were introduced to many other Aristotelian propositions. On the other hand, it was also made clear that biblical descriptions of cosmological and physical phenomena are not to be taken literally, for Scripture often employs a simple language in order to be accessible to common folk. Thus, where possible, literal interpretations of Scripture were supposed to be replaced by interpretations based on the Aristotelian natural philosophy.[[CITE_Grant (2004)|pp. 220-224, 245]] Importantly, it is only after this reconciliation that the modified Aristotelian-medieval natural philosophy became accepted by the community.[[CITE_Lindberg (2007)|pp. 250-251]] This and similar examples seem to be suggesting that the compatibility criteria employed by the medieval scientific community were quite different from those employed nowadays. While apparently we are inconsistency-tolerant (at least when dealing with theories in empirical science), the medieval scientific community was inconsistency-intolerant in the sense that they wouldn’t tolerate any open inconsistencies in the mosaic.[[CITE_Barseghyan (2015)|pp. 160-161]]</blockquote>  
Like [[Demarcation Criteria|demarcation]] and [[Acceptance Criteria|acceptance criteria]], compatibility criteria can be part of an epistemic agent's employed method. An epistemic agent employs these criteria to determine whether two elements (e.g. methods, theories, questions) are mutually compatible or incompatible, i.e. whether they can be simultaneously part of the agent's mosaic. In principle, these criteria can be employed to determine the compatibility of elements present in the mosaic, as well as those outside of it (e.g. scientists often think about whether a proposed theory is compatible with the theories actually accepted at the time). [[Patrick Fraser|Fraser]] and [[Ameer Sarwar|Sarwar]] point out that [[Hakob Barseghyan|Barseghyan]]'s [[Compatibility Criteria (Barseghyan-2015)|original definition]] of the term "excludes a simple point that is assumed elsewhere in scientonomy: elements other than theories (i.e. methods and questions) may be compatible or incompatible with other elements (which, again, need not be theories)".[[CITE_Fraser and Sarwar (2018)|p. 72]] To fix this omission, Fraser and Sarwar "suggest that the word ‘theories’ be changed to ‘elements’ to account for the fact that the compatibility criteria apply to theories, methods, and questions alike".[[CITE_Fraser and Sarwar (2018)|p. 72]] Different communities can have different compatibility criteria. While some communities may opt to employ the logical law of noncontradiction as their criterion of compatibility, other communities may be more tolerant towards logical inconsistencies. According to Barseghyan, the fact that these days scientists "often simultaneously accept theories which strictly speaking logically contradict each other is a good indication that the actual criteria of compatibility employed by the scientific community might be quite different from the classical logical law of noncontradiction".[[CITE_Barseghyan (2015)|p. 11]] For example, this is apparent in the case of general relativity vs. quantum physics where both theories are accepted as the best available descriptions of their respective domains (i.e. they are considered ''compatible''), but are known to be in conflict when applied simultaneously to such objects as black holes.  
According to [[Patrick Fraser|Fraser]] and [[Ameer Sarwar|Sarwar]], "[[Compatibility (Fraser-Sarwar-2018)|compatibility]] is a distinct epistemic stance that agents can take towards elements".[[CITE_Fraser and Sarwar (2018)|p.70]] They show this by arguing that it is possible to take the stance of compatibility towards a pair of elements without taking any of the other stances towards these elements. Thus, compatibility is distinct from [[Theory Acceptance|acceptance]], since two elements need not be in the same mosaic, or even accepted by any agent to be considered, in principle, compatible. For example, an epistemic agent may consider Ptolemaic astrology compatible with Aristotelian natural philosophy without accepting either Ptolemaic astrology or Aristotelian natural philosophy. Compatibility is also different from [[Theory Use|use]], since a pair of theories can be considered compatible regardless of whether any of them is considered useful. For instance, one can consider quantum mechanics and evolutionary biology compatible, while finding only the former useful. Finally, compatibility is also distinct from [[Theory Pursuit|pursuit]], since an agent can consider a pair of theories compatible with or without pursuing either. An agent, for instance, may find two alternative quantum theories pursuitworthy while clearly realizing that the two are incompatible.  +
<blockquote>The traditional version of comparativism holds that when two theories are compared it doesn’t make any difference which of the two is currently accepted. In reality, however, the starting point for every theory assessment is the current state of the mosaic. Every new theory is basically an attempt to modify the mosaic by inserting some new elements into the mosaic and, possibly, by removing some old elements from the mosaic. Therefore, what gets decided in actual theory assessment is whether a proposed modification is to be accepted. In other words, we judge two competing theories not in a vacuum, as the traditional version of ''comparativism'' suggests, but only in the context of a specific mosaic. It is this version of the comparativist view that is implicit in the laws of scientific change.[[CITE_Barseghyan (2015)|p. 184]] </blockquote> Theory assessment is an assessment of a proposed modification of the [[Scientific Mosaic|scientific mosaic]] by the [[Method|method]] employed at the time. By [[The First Law|the first law]], a [[Theory|theory]] already in the mosaic is no longer appraised. By [[The Second Law|the second law]], it is only assessed when it first enters the mosaic (see the detailed deduction below).[[CITE_Barseghyan (2015)|pp. 185-196]] Barseghyan does note the following: "if, for whatever reason, we need to compare two competing theories disregarding the current state of the mosaic, we are free to do so, but we have to understand that in actual scientific practice such abstract comparisons play no role whatsoever. Any theory assessment always takes into account the current state of the mosaic".[[CITE_Barseghyan (2015)|p. 186]]  +
Barseghyan presents the following description of the deduction of the ''contextual appraisal theorem'': <blockquote> By the second law, in actual theory assessment a contender theory is assessed by the method employed at the time ... In addition, it follows from the first law for theories that a theory is assessed only if it attempts to enter into the mosaic; once in the mosaic, the theory no longer needs any further appraisal. In this sense, the accepted theory and the contender theory are never on equal footing, for it is up to the contender theory to show that it deserves to become accepted. In order to replace the accepted theory in the mosaic, the contender theory must be declared superior by the current method; to be “as good as” the accepted theory is not sufficient.</blockquote> [[File:Contextual-appraisal.jpg|607px|center||]]  +
Barseghyan (2015) provides another rich illustration for the Contextual Appraisal theorem with "the famous Eucharist episode which took place in the second half of the 17th century," which is a subtle but important piece of the already difficult scientonomic case of the 18th-century transition from Cartesian to Newtonian natural philosophy.[[CITE_Barseghyan (2015)|p. 190]] Barseghyan describes the episode as follows: <blockquote>This episode has been often portrayed as a clear illustration of how religion affects science. In particular, the episode has been presented as though the acceptance of Cartesianism in Paris was delayed due to the role played by the Catholic Church. It is a historical fact that Descartes’s natural philosophy was harshly criticized by the Church. In 1663, his works were even placed on the Index of Prohibited Books and in 1671 his conception was officially banned from schools. Thus, at first sight, it may appear as though the acceptance of the Cartesian science in Paris was indeed hindered by religion. Yet, upon closer scrutiny, it becomes obvious that this interpretation is too superficial. When Descartes constructed his natural philosophy, it soon turned out that it had a very troubling consequence: it wasn’t readily reconcilable with the doctrine of transubstantiation accepted by the Aristotelian-Catholic scientific community of Paris. The idea of transubstantiation was proposed by Thomas Aquinas in his Summa Theologiae as an explanation of one of the Christian dogmas – namely, that of the Real Presence which states that, in the Eucharist, Christ is really present under the appearances of the bread and wine (i.e. literally, rather than metaphorically or symbolically). In his explanation of Real Presence, Aquinas employed Aristotelian concepts of substance and accident. 
In particular, he stated that in the Eucharist the consecration of bread and wine effects the change of the whole substance of the bread into the substance of Christ’s body and of the whole substance of the wine into the substance of his blood. Thus, what happens in the Eucharist is transubstantiation – a transition from one substance to another. As for the accidents of the bread and wine such as their taste, color, smell etc., Aquinas held that they remain intact, for transubstantiation doesn’t affect them. The doctrine of transubstantiation soon became the accepted Catholic explanation of the Real Presence. The problem was that Descartes’s theory of matter didn’t provide any mechanism similar to that stated in the doctrine of transubstantiation. To be more precise, it followed from Descartes’s original theory that transubstantiation was impossible. Recall that, according to Descartes, the only principal attribute of matter is extension: to be a material object amounts to occupying some space. It follows from this basic axiom that accidents such as smell, color, or taste are effects produced upon our senses by the configuration and motion of material particles. In other words, we simply cannot perceive the accidents of bread and wine unless there is bread and wine in front of us. What makes bread what it is, what constitutes its substance (to use Aristotle’s terms) is a specific combination of material particles; and the same goes for wine. Thus, when the substance of bread changes into the substance of Christ’s body, in the Cartesian theory, it means that some combination of particles which constitutes the bread changes into another combination of particles which constitutes Christ’s body. 
The key point here is that, in Descartes’s theory, it is impossible for Christ’s body to have the appearance of bread, since the appearance is merely an effect produced by that specific combination of particles upon our senses; Christ’s body and blood simply cannot produce the accidents of bread and wine. Obviously, on this point, Descartes’s theory was in conflict with the doctrine of transubstantiation. This conflict became the focal point of criticism of Descartes’s theory. To a 21st-century reader used to a clear-cut distinction between science and religion this may seem a purely religious matter. Yet, in the second half of the 17th century, this was precisely a scientific concern. The crucial point is that back then theology wasn’t separate from other scientific disciplines: the scientific mosaic of the time included many theological propositions such as “God exists”, “God is omnipotent”, or “God created the world”. These propositions were part of the mosaic just as any other accepted proposition. If we could visit 17th-century Paris, we would see that the dogma of Real Presence and the doctrine of transubstantiation weren’t something foreign to the scientific mosaic of the time – they were accepted parts of it alongside such propositions as “the Earth is spherical”, “there are four terrestrial elements”, “there are four bodily fluids” and so on. Thus, Descartes’s theory was in conflict not with some “irrelevant religious views” but with a key element of the scientific mosaic of the time, the doctrine of transubstantiation. More precisely, the problem was that back then no theory was allowed to be in conflict with the accepted theological propositions. This latter requirement was part of the method of the time. The requirement strictly followed from the then-accepted belief that theological propositions are infallible. Yet, eventually, the Cartesian natural philosophy did become accepted in Paris. 
If the laws of scientific change are correct, it could become accepted only with a special patch that would reconcile it with the doctrine of transubstantiation. It is not clear as to what exactly this patch was. To be sure, there is vast literature on different Cartesian solutions of the problem: the solutions proposed by Descartes, Desgabets, and Arnauld are all well known. However, I have failed to find a single historical narrative revealing which of these patches became accepted in the mosaic alongside the Cartesian natural philosophy circa 1700. Based on the available data, I can only hypothesize that the accepted patch was the one proposed by Arnauld in 1671. According to Arnauld’s solution, the Cartesian natural philosophy concerns only the natural course of events. However, since God is omnipotent, he is able to alter the natural course of events. Thus, he can turn bread and wine into the body and blood of Christ even if that is not something that can be expected naturally. Moreover, since our capacity of reason is limited, God can do things that are beyond our reason. Therefore, it is possible for Christ to be really present under the accidents of the bread and wine without our being able to comprehend the mechanism of that presence. One reason why I think that this could be the accepted patch is that a similar solution was also proposed by both Régis and Malebranche. The latter basically held that what happens in the Eucharist is a miracle and is not to be explicated in philosophical terms. In this context, the position of Malebranche is especially important for, at the time, his Recherche de la Vérité was among the main Cartesian texts studied at the University of Paris. Again, I cannot be sure that the accepted patch was exactly that of Arnauld and Malebranche; only closer scrutiny of the curriculum of Paris University in 1700-1740 as well as other relevant sources can settle this issue. 
Yet, the laws of scientific change tell us that there should be one patch or another – the Cartesian natural philosophy couldn’t have been accepted without one. In short, initially the Cartesian theory didn’t satisfy the requirements implicit in the mosaic of the time, namely it was in conflict with one of those propositions which were not supposed to be denied. Thus, the acceptance of Descartes’s theory was hindered not because “dogmatic clergy” didn’t like it on some mysterious religious grounds, but because initially it didn’t satisfy the requirements of the time. This point will become clear if we turn our attention to the scientific mosaic of Cambridge of the same time period. Circa 1660, the mosaics of Paris and Cambridge were similar in many respects. For one, they both included all the elements of the Aristotelian-medieval natural philosophy. In addition, they shared the basic Christian dogmas, such as the dogma of Real Presence. Yet, they were different in one important respect: whereas the mosaic of Paris included the propositions of Catholic theology, the mosaic of Cambridge included the propositions of Anglican theology. Namely, the Cambridge mosaic didn’t include the doctrine of transubstantiation. In that mosaic, the Cartesian theory was only incompatible with the Aristotelian-medieval natural philosophy which it aimed to replace. This difference proved crucial. Whereas reconciling the Cartesian natural philosophy with the doctrine of transubstantiation was a challenging task, reconciling it with the dogma of Real Presence wasn’t difficult. One such reconciliation was suggested by Descartes himself and was developed by Desgabets. The idea was that the bread becomes the body of Christ by virtue of being united with the soul of Christ, while the material particles of the bread remain intact. For the Catholic, this solution was unacceptable, for it denied the doctrine of transubstantiation and, therefore, was a heresy. 
Yet, for the Anglican, this solution could be acceptable, since the doctrine of transubstantiation wasn’t part of the Anglican mosaic. Thus, whereas the Catholic was faced with a seemingly insurmountable problem of reconciling the Cartesian natural philosophy with the doctrine of transubstantiation, the Anglican didn’t have that problem. This explains why the whole Eucharist case was almost exclusively a Catholic affair. This episode illustrates the main point of the contextual appraisal theorem: a theory is assessed only in the context of a specific mosaic and the outcome of the assessment depends on the state of the mosaic of the time.[[CITE_Barseghyan (2015)|p. 190-196]]</blockquote>  
Barseghyan's deduction of the ''contextual appraisal theorem'' can be further understood through a brief example. Consider a situation wherein "the proponents of some alternative quantum theory argue that the currently accepted theory is no better than their own quantum theory".[[CITE_Barseghyan (2015)|p. 185]] But, importantly, we notice that they are taking theory assessment out of its ''historical context''! "Particularly," Barseghyan comments, "they ignore the phenomenon of scientific inertia – they ignore that, in order to remain in the mosaic, the accepted theory doesn’t need to do anything (by ''the first law'' for theories) and that it is their obligation to show that their contender theory is better (by ''the second law'')".[[CITE_Barseghyan (2015)|p. 185]]  +
The depiction of Galileo as a hero, standing up against church authorities to present his "clearly superior" position, is well-known.[[CITE_Barseghyan (2015)|p. 187]] However, as Barseghyan rightly notes, it fails to take the contemporaneous ''scientific mosaic'' of Galileo's community into account. The traditional account, placing both theories against each other in a vacuum, "failed to appreciate both that theory assessment is an assessment of a proposed modification and that a theory is assessed by the method employed at the time. Once we focus our attention on the state of the scientific mosaic of the time," though, "it becomes obvious that the scientific community of the time simply couldn’t have acted differently".[[CITE_Barseghyan (2015)|p. 188]] Let's consider the ''scientific mosaic'' circa the 1610s. It consisted of many interconnected Aristotelian-medieval theories, including ''geocentrism,'' which "was a deductive consequence of the Aristotelian law of natural motion and the theory of elements".[[CITE_Barseghyan (2015)|p. 188]] So, "it was impossible to simply cut geocentrism out of the mosaic and replace it with heliocentrism – the whole Aristotelian theory of elements would have to be rejected as well". And it was only made more difficult because "the theory of elements itself was tightly connected with many other parts of the mosaic," such as the ''possibility of transformation of elements'' and the medical theory of the time (four humours).[[CITE_Barseghyan (2015)|p. 189]] "In short," summarizes Barseghyan, "in order to make the rejection of geocentrism possible, a whole array of other elements of the Aristotelian-medieval mosaic would have to be rejected as well".[[CITE_Barseghyan (2015)|p. 189]] Now by ''the first law for theories'' and ''the theory rejection theorem'', "only the acceptance of an alternative set of theories could defeat the theories of the Aristotelian-medieval mosaic".[[CITE_Barseghyan (2015)|p. 189]] "Unfortunately for Galileo," concludes Barseghyan, "at the time there was no acceptable contender theory comparable in scope with the theories of the Aristotelian-medieval mosaic ... Galileo didn’t have an acceptable replacement for all the elements of the mosaic that had to be rejected together with geocentrism ... The traditional interpretation of this historical episode failed to appreciate this important point and, instead, preferred to blame the dogmatism of the clergy".[[CITE_Barseghyan (2015)|p. 189]] Another key problem with the typical presentation of this episode is its assessment, "not by the implicit requirements of the time, but by the requirements of the hypothetico-deductive method, which became actually employed a whole century after the episode took place. Namely, Galileo was said to have shown the superiority of the Copernican heliocentrism by confirming some of its novel predictions," which, by the traditional account, was considered "a clear-cut indication of the superiority of the Copernican hypothesis".[[CITE_Barseghyan (2015)|p. 189]] Yet, Barseghyan's more careful study of the episode reveals the following: "the requirements of hypothetico-deductivism had little in common with the actual expectations of the community of the time. Although the task of reconstructing the late Aristotelian-medieval method of natural philosophy is quite challenging and may take a considerable amount of labour, one thing is clear: the requirement of confirmed novel predictions was not among implicit expectations of the community of the time. Back then, theories simply didn’t get assessed by their confirmed novel predictions".[[CITE_Barseghyan (2015)|pp. 189-90]] And we note that this important point becomes apparent through the ''contextual appraisal theorem''.  
The core questions of a [[Discipline| discipline]] are those general questions that are essential to a discipline, having the power to define it and establish its boundaries within a hierarchy of questions. They are identified as such in the discipline's [[Delineating Theory| delineating theory]].[[CITE_Patton and Al-Zayadi (2021)]] The [[Scientific Mosaic| scientific mosaic]] consists of [[Theory| theories]] and [[Question| questions]].[[CITE_Barseghyan (2015)]][[CITE_Barseghyan (2018)]][[CITE_Rawleigh (2018)]][[CITE_Sebastien (2016)]] Questions form hierarchies in which more specific questions are [[Subquestion| subquestions]] of broader questions. Theories enter into this hierarchy as well since questions presuppose theories, and theories are answers to questions. It is the position of core questions within such hierarchies that confers upon them the power to define and establish the boundaries of a discipline by indicating which questions and theories are included. For example, the question 'how did living things originate as a result of evolution?' is a core question of evolutionary biology.  +
A core theory of a [[Discipline| discipline]] is a [[Theory| theory]] presupposed by the discipline's [[Core Question| core questions]].[[CITE_Patton and Al-Zayadi (2021)]] The [[Scientific Mosaic| scientific mosaic]] consists of [[Theory| theories]] and [[Question| questions]].[[CITE_Barseghyan (2015)]][[CITE_Barseghyan (2018)]][[CITE_Rawleigh (2018)]][[CITE_Sebastien (2016)]] Questions constitute hierarchies where more specific questions are [[Subquestion| subquestions]] of broader questions. Within this hierarchy, certain general questions play a special role as core questions. These questions are essential to a discipline, and have the power to identify it and determine its boundaries. For example, a core question of evolutionary biology would be 'how did living species originate as a result of evolution?'. Questions always presuppose theories, which endow them with semantic content. Those presupposed by a discipline's core questions are that discipline's core theories. For our example, the theory in question would be the neo-Darwinian theory of evolution by natural selection.  +
D
This somewhat simplistic definition of ''definition'' is meant to highlight that definitions are themselves theories (statements, propositions). As a result, any [[Epistemic Stances|stance]] that can be taken by [[Epistemic Agent|epistemic agents]] towards theories can also be taken towards definitions.  +
According to Barseghyan, definitions are an integral part of the process of scientific change.[[CITE_Barseghyan (2018)]]  +
According to Barseghyan, definitions are essentially a species of theories.  +
Nicholas Overgaard explains the topic  +
One can specify a [[Discipline|discipline]] in terms of a set of its [[Core Question| core questions]]. A delineating theory is a second-order [[Theory|theory]] identifying this set of core questions, and allowing it to exist as an [[Epistemic Element| epistemic element]] within the [[Scientific Mosaic|mosaic]].[[CITE_Patton and Al-Zayadi (2021)]] For example, the delineating theory of modern physics might identify 'How do matter and energy behave?' as a core question of modern physics.  +
[[The Law of Theory Demarcation (Sarwar-Fraser-2018)|The law of theory demarcation]] states that a theory is deemed scientific only if it satisfies the demarcation criteria employed by the epistemic community at the time. [[Theory Acceptance (Fraser-Sarwar-2018)|The definition of theory acceptance]] suggested by [[Patrick Fraser|Fraser]] and [[Ameer Sarwar|Sarwar]] states that an accepted theory is a ''scientific'' theory that is taken to be the best available description or prescription of its object of study. It follows from these two premises that whenever a theory is accepted, it must also have satisfied the demarcation criteria of the time. After all, if it did not, then the definition of theory acceptance would be contradicted. Therefore, if the definition of theory acceptance and the law of theory demarcation are accepted, then it must also be accepted that accepted theories satisfy the criteria of demarcation. This demarcation-acceptance synchronism is presented somewhat more formally in the following diagram: [[File:Demarcation-Acceptance_Synchronism_theorem_deduction_(Fraser-Sarwar-2018).png|761px|center||]]  +
Hakob Barseghyan's lecture on Cartesian Worldview  +
According to [[Zoe Sebastien|Sebastien]]'s definition of the term, descriptive theories aim at ''describing'' a certain object under study, where ''describe'' is understood in the broad sense and includes ''explain'', ''predict'', etc. Thus, the term encompasses theories that attempt to describe a certain phenomenon, process, or state of affairs in the past, present, or future. All of the following propositions would qualify as ''descriptive'': * The acceleration of an object as produced by a net force is directly proportional to the magnitude of the net force, in the same direction as the net force, and inversely proportional to the mass of the object. (''A general description of a phenomenon''.) * Paris is the capital of France. (''A description of a current state of affairs''.) * Augustus was the first emperor of the Roman Empire. (''A description of a past state of affairs''.) * Halley's comet will next appear in the night sky in the year 2062. (''A description of a future event, i.e. a prediction''.) Typically, most propositions produced by both empirical and formal sciences would fall under the category of ''descriptive theory''. Among others, this includes substantive propositions of physics, chemistry, biology, psychology, sociology, and economics, as well as those of historical sciences. Excluded from this category are [[Normative Theory|normative propositions]], such as those of methodology, ethics, or aesthetics.  +
According to Barseghyan, many theories attempt to describe something. Thus, there are descriptive theories.[[CITE_Barseghyan (2015)|p. 5]]  +
A discipline ''A'' is characterized by a non-empty set of [[Core Question| core questions]] ''Q<sub>CA</sub>'' and a [[Delineating Theory| delineating theory]] stating that ''Q<sub>CA</sub>'' are the core questions of the discipline.[[CITE_Patton and Al-Zayadi (2021)]] The [[Scientific Mosaic|scientific mosaic]] consists of [[Theory|theories]] and [[Question|questions]].[[CITE_Barseghyan (2015)]][[CITE_Barseghyan (2018)]][[CITE_Rawleigh (2018)]][[CITE_Sebastien (2016)]] As a whole, a discipline ''A'' consists of a set of accepted questions ''Q<sub>A</sub>'', and the theories which provide answers to those questions, or which those questions presuppose. [[CITE_Patton and Al-Zayadi (2021)]] Questions form hierarchies, with more specific questions being [[Subquestion| subquestions]] of more general questions. Theories find a place in these hierarchies, since each theory is an attempt to answer a certain question, and each question presupposes certain theories. Because of such hierarchical relations, it is possible to characterize a discipline by identifying a set of [[Core Question| core questions]], ''Q<sub>CA</sub>''. These core questions are judged by some [[Epistemic Agent| agent]] to be related to one another, essential to a discipline, and definitive of its boundaries. The other questions of a discipline are subquestions of its core questions. A set, as such, can't be part of a scientific mosaic consisting of theories and questions. We, therefore, take a discipline to be defined by a [[Delineating Theory| delineating theory]] that identifies the set of core questions ''Q<sub>CA</sub>'' characterizing that discipline.  +
[[Epistemic Stances Towards Theories|Theories]] and [[Epistemic Stances Towards Questions| questions]] can both be the subject of the epistemic stances of [[Epistemic Agent|epistemic agents]]. [[CITE_Barseghyan (2018)]][[CITE_Rawleigh (2018)]][[CITE_Patton (2019)]] [[Discipline| Disciplines]] like biology, physics, and astrology can also be the subject of such stances. For example, biology and physics are accepted by the scientific community of the modern world as disciplines, but astrology is rejected. In our definition, a discipline is said to be accepted by an epistemic agent if that agent accepts the [[Core Question| core questions]] specified in the discipline's [[Delineating Theory|delineating theory]], as well as the delineating theory itself.[[CITE_Patton and Al-Zayadi (2021)]] This definition takes discipline acceptance to be derivative of [[Theory Acceptance|theory acceptance]] and [[Question Acceptance|question acceptance]]. It requires, first, that an agent accepts the delineating theory that specifies that a particular set of core questions characterize a discipline. For example, the scientific community accepts that the question 'how do matter and energy behave?' is a core question of modern physics. The community also accepts the question itself. Therefore, they can be said to accept physics as a discipline. The scientific community of the modern world also accepts that the question 'how do the positions of celestial objects at the time of one's birth influence one's character?' is a core question of astrology. However, they do not accept the question itself, because they reject its supposition that such an influence exists. Thus, the scientific community rejects the discipline of astrology.  +
Nicholas Overgaard explains the topic  +
No [[Theory|theory]] acceptance may take place in a genuinely dogmatic [[Scientific Community|community]]. "Namely," Barseghyan notes when introducing '''the theory rejection theorem''' in [[Barseghyan (2015)]], "theory change is impossible in cases where a currently accepted theory is considered as revealing the final and absolute truth".[[CITE_Barseghyan (2015)|p. 165]]  +
Suppose a community has an accepted theory that asserts that it is the final and absolute truth. By the [[The Third Law (Barseghyan-2015)|Third Law]], we deduce the employed method: accept no new theories, ever. By the [[The Second Law|Second Law]], we deduce that no new theory can ever be accepted by the employed method of the time. By the [[The First Law (Barseghyan-2015)|First Law]], we deduce that the accepted theory will remain the accepted theory forever.[[CITE_Barseghyan (2015)|p. 165-167]] [[File:Dogmatism-theorem.jpg|607px|center||]]  +
Barseghyan emphasizes that with the [[Dogmatism No Theory Change theorem]], "we can easily distinguish between genuinely dogmatic communities and communities which only ''appear'' dogmatic".[[CITE_Barseghyan (2015)|p. 166]] He presents the following example: <blockquote>It was once believed that the medieval scientific community with its Aristotelian mosaic was a dogmatic community, for it (allegedly) held on to its theories at all costs and disregarded all new theories. Yet, upon closer scrutiny it becomes obvious that the Aristotelian-medieval community was anything but dogmatic. Had the medieval community indeed taken a genuinely dogmatic stance, no scientific change would have been possible in their mosaic. But it is a historical fact that the Aristotelian-medieval mosaic was gradually changing especially in the 16th and 17th centuries; towards the end of the 17th century many of its key elements were replaced by new elements. Finally, by circa 1700 the Aristotelian-medieval system of theories was replaced with those of Descartes and Newton. This would have been impossible had the theories of the mosaic been actually taken as revealing the final truth. Thus, the Aristotelian-medieval community was not dogmatic. For some real examples of dogmatic communities think of those communities which, having started with some dogmas, fanatically held on to those dogmas and never considered their modification possible.[[CITE_Barseghyan (2015)|p. 166-7]]</blockquote>  +
A '''substantive method''' is one that presupposes at least one contingent proposition, i.e. a proposition whose truth depends on the state of something in the external world. According to our understanding of contingent propositions, all such propositions are '''fallible'''. Since any substantive method necessarily presupposes at least one contingent proposition, it is therefore itself fallible. Thus, by the '''synchronism of method rejection''' theorem, the rejection of a theory can result in the rejection of a method, rendering all substantive methods dynamic.  +
Here is the deduction as it appears in Barseghyan (2015):[[CITE_Barseghyan (2015)|p. 220]] <blockquote> According to the thesis of fallibilism, accepted in the contemporary epistemology, no contingent proposition (i.e. proposition with empirical content) can be demonstratively true. Therefore, since substantive methods are based on fallible contingent propositions, they cannot be immune to change. Imagine a typical mosaic with an accepted theory and a method that implements the constraints imposed by this theory. It is obvious that the method in question is necessarily substantive (by the definition of substantive method). Now, suppose that there appears a new theory that manages to satisfy the current requirements and, as a result, replaces the accepted theory in the mosaic. Naturally, this new theory imposes new abstract constraints (by the third law). It is conceivable that these new abstract constraints are incompatible with the requirements of the current method. In such an instance, the old method will be replaced by the new one (by the method rejection theorem). In short, a rejection of theories can trigger a rejection of the substantive method. This idea has been already implicit in the synchronism of method rejection theorem. Thus, there are no guarantees that an employed substantive method will necessarily remain employed ad infinitum. Consequently, any substantive method is necessarily ''dynamic''.</blockquote> [[File:Dynamic-substantive-methods.jpg|607px|center||]]  +
One example is the transition from the controlled trial method to the blind trial method, and then to the double-blind trial method. Blind trials were introduced as an implementation of the more abstract method that required accounting for the placebo effect on patients when testing drugs. Once the placebo effect became known, the method changed. Later, when it became known that the experimenter's bias could also affect patients when testing drugs, the method changed once more, from blind to double-blind trials.  +
Another example is the transition from the Aristotelian-Medieval Method to the Hypothetico-Deductive Method. The former assumed that there was an essential difference between the natural and the artificial, and that the results of experiments, being artificial, were therefore not to be trusted when trying to grasp the essence of things. In both the Cartesian and Newtonian worldviews, no such distinction was assumed, and experiments could therefore be as reliable as observations when trying to understand the world. Once the theories changed (from the natural/artificial distinction to no such distinction), the methods changed too (from a method excluding experiments to the experimental method).  +
E
According to Oh, there is such a thing as element decay.[[CITE_Oh (2021)]]  +
Element decay is not a scientonomic phenomenon and, thus, is outside of the scope of scientonomy.  +
A method is said to be ''employed'' at time ''t'' if, at time ''t'', theories become accepted only when their acceptance is permitted by the method. [[CITE_Barseghyan (2015)|p. 53]] ''The second law'' of theory acceptance follows directly from this definition of ''employed method''.  +
According to this definition of the term, ''employed method'' is nothing but the actual expectations of a certain community at a certain time. This is in tune with the actual scientonomic usage of the term. It is safe to say that this definition is tacitly used throughout Barseghyan's [[Barseghyan (2015)|''The Laws of Scientific Change'']]. For instance, when he says that the method of intuition schooled by experience was employed by the community of Aristotelian-Medieval natural philosophers, he actually means that this community expected new theories to be intuitively true.[[CITE_Barseghyan (2015)|pp. 143-145]][[CITE_Patton, Overgaard, and Barseghyan (2017)|p. 35]] When he says that the double-blind trial method is currently employed in drug testing, he means that "the community expects new drugs to be tested in double-blind trials".[[CITE_Patton, Overgaard, and Barseghyan (2017)|p. 35]][[CITE_Barseghyan (2015)|pp. 134-142]] Originally, this tacit definition of employed method has been repeatedly conflated with [[Employed Method (Barseghyan-2015)|the official definition of the term]] given on page 54 of ''The Laws of Scientific Change''.[[CITE_Barseghyan (2015)|pp. 54,144,145]] However, a community’s expectations were not mentioned in Barseghyan's [[Employed Method (Barseghyan-2015)|original definition]] of employed method. This new definition of ''employed method'' as "expectations of the community" was suggested to fix this conflation.  +
[[Joshua Allen]] makes a case for this broad definition of the term. According to Allen:[[CITE_Allen (2023)|pp. 75-76]] <blockquote> the broader the definition, the more likely it is to account for all epistemic practices conducted throughout history and across all geographies. Any narrowing of the definition risks excluding epistemically relevant practices that we may have simply failed to consider. The above broad framing, therefore, allows for the best chance at covering all actions that one would normally consider epistemic. It is agnostic towards the precise characteristics that may accompany an epistemic action, beyond what could reasonably be assumed to be the bare minimum, an epistemic agent taking an action that somehow involves an epistemic element. This definition has the additional benefit of aligning well with other sister categories in the scientonomic ontology. An epistemic stance, for instance, is understood in scientonomy to refer to the attitude of an epistemic agent towards an epistemic element. Having such similarly phrased definitions across basic notions in scientonomy brings a sense of symmetry to the ontology.</blockquote> Allen makes a case that if we were to go with a narrower definition, we would risk excluding such potentially relevant actions as publishing:[[CITE_Allen (2023)|p. 77]] <blockquote> The act of publishing a textbook does not seem directly to involve an intent to generate or assess epistemic elements. Similarly, while one would be hard-pressed not to view the spreading of knowledge as epistemically relevant, it could be difficult to confirm that an intent to generate or assess epistemic elements is involved. In both of these cases, it is not obvious how they could qualify as epistemic actions under the narrow definition, as they are not necessarily aiming to generate or assess epistemic elements. 
Yet, actions of publishing textbooks or spreading knowledge more generally could easily be epistemically relevant without being accompanied by such an intent, by way of their place within a broader tapestry of specific scientific practices.</blockquote>  
According to Allen, epistemic actions are a key part of everyday epistemic practice.  +
An ''epistemic agent'' acts in relation to [[Epistemic Element|epistemic elements]] such as theories, questions, and methods. The actions of an epistemic agent amount to taking [[Epistemic Stance|epistemic stances]] towards these elements, such as accepting or pursuing a theory, accepting a question, or employing a method. The stances of an epistemic agent must be ''intentional''. To be so, they must satisfy the following conditions: # the agent must have a semantic understanding of the propositions that constitute the epistemic element in question and of its available alternatives; and # the agent must be able to choose from among the available alternatives with reason, and for the purpose of acquiring knowledge.[[CITE_Patton (2019)]]  +
According to Barseghyan, epistemic agents are an essential part of the process of scientific change, as they take stances towards epistemic elements.  +
This definition attempts to capture what is arguably the key feature of epistemic communities: their collective intentionality to study/know the world. This feature, according to the definition, distinguishes epistemic communities from [[Non-Epistemic Community|non-epistemic communities]], such as political, economic, or familial communities. To use [[Nicholas Overgaard|Overgaard]]'s own example, "it is clear that an orchestra is a community: the various musicians can be said to have a collective intentionality to play a piece of music" and yet its collective intentionality is different from that of knowing the world.[[CITE_Overgaard (2017)|p. 59]]  +
According to [[Nicholas Overgaard|Overgaard]], communities that do not have a collective intentionality to know the world can still have sub-communities that do have such an intentionality. Overgaard illustrates this with the example of Google, a company that can be considered a [[Non-Epistemic Community|non-epistemic community]] as its collective intentionality is that of making profit. Yet, as an innovative company, Google has many sub-communities which do have a collective intentionality to know the world, such as "a research and development team trying to better know Internet technologies, or a marketing team trying to better know how to reach consumers".[[CITE_Overgaard (2017)|p. 59]] By [[Epistemic Community (Overgaard-2017)|definition]], these sub-communities are [[Epistemic Community|epistemic]]. Thus, argues Overgaard, it is possible for an epistemic community to be the sub-community of a non-epistemic community.  +
The claim of the existence of epistemic communities can be traced back to Overgaard, who presented the distinction between epistemic and non-epistemic communities in his [[Overgaard (2017)|''A Taxonomy for the Social Agents of Scientific Change'']].[[CITE_Overgaard (2017)]]  +
According to Barseghyan, epistemic community is an epistemic agent, i.e. it is capable of taking [[Epistemic Stance|epistemic stances]] towards [[Epistemic Element|epistemic elements]].[[CITE_Barseghyan (2018)]]  +
The notion of epistemic agency implies that an agent takes epistemic stances ''intentionally''. That is: * the agent has a semantic understanding of the propositions that constitute the epistemic element in question, and of its alternatives, and * the agent is capable of choosing among them with reason, and with the goal of acquiring knowledge. Communities can meet these conditions. An [[Epistemic Community|''epistemic community'']], by definition, has a collective intentionality to know the world and can thus be said to pursue the goal of acquiring knowledge.[[CITE_Overgaard (2017)]] In order for a community to be a communal epistemic agent, it must be the case that its epistemic stances belong to the community as a whole, rather than simply to its constituent members. To understand how this can be, we must consider some general properties of systems with multiple interacting parts. Such systems, if their parts are appropriately organized in relation to one another, often exhibit ''emergent properties''.[[CITE_Bedau (2003)]][[CITE_Kim (1999)]][[CITE_O'Connor and Yu Wong (2015)]][[CITE_Wimsatt (2006)]][[CITE_Wimsatt (2007)|pp. 274-312]] William Wimsatt defined the emergent properties of a system as those that depend on the way its parts are organized.[[CITE_Wimsatt (2006)]][[CITE_Wimsatt (2007)|pp. 274-312]] ''Aggregate systems'' are those in which the parts do not bear an organized relationship to one another. The parts all play similar causal roles and can be interchanged or rearranged without consequence. The behaviour of the whole is just an additive, statistical consequence of that of its parts and no emergent properties are present. A jumbled pile of electronic parts is an example of an aggregate system. Its properties, like its mass and its volume, are just the sum of the masses and volumes of all its parts. A ''composed system'' possesses new emergent properties due to the way in which its parts are organized in relation to one another. 
A radio assembled by arranging electronic parts in the proper relation to one another is an example of a composed system. The ability to be a radio is an emergent property because none of the radio's parts exhibits it by itself. The parts are organized so that each one plays its own distinctive, specialized role in producing the emergent property. A number of authors have argued that epistemic communities are organized so as to give rise to emergent properties.[[CITE_List and Pettit (2006)]][[CITE_Palermos and Pritchard (2016)]][[CITE_Palermos (2016)]][[CITE_Theiner (2015)]][[CITE_Theiner, Allen, and Goldstone (2010)]][[CITE_Theiner and O'Connor (2010)]] Wimsatt's ideas have been specifically applied to epistemic communities by Theiner and O'Connor. [[CITE_Theiner and O'Connor (2010)]] An epistemic community is an organized system of individual epistemic agents, each of which makes its own distinctive contribution to the epistemic stances taken by the communal agent as a whole. These roles are determined by institutional and other forms of organization of the communal agent, and involve varied and complementary areas of specialized knowledge. Collective decision-making processes and institutional frameworks interact with and influence the views of individual community members. These allow a community to take epistemic stances towards epistemic elements that are distinct from those its individual members might take if left to their own devices. In an analysis of legal decision-making processes, Tollefsen [[CITE_Tollefsen (2004)]] has shown that there are a variety of circumstances under which a community's epistemic stances are not the simple aggregate of its individual members' stances. 
Longino [[CITE_Longino (1990)]][[CITE_Longino (2019)]][[CITE_Longino (1996)]] maintains that, when communities have normatively appropriate structures, critical interactions among community members holding different points of view mitigate the influence of individual subjective preferences and allow communities to achieve a level of objectivity in their taking of epistemic stances that is not generally possible for individual agents. Barseghyan [[CITE_Barseghyan (2015)|pp. 43-52]] has argued that the methods used by individual prominent scientists often, in fact, do not coincide with those of their community and that a community's acceptance of a theory is a function of the methods employed by that community rather than individual idiosyncrasies. Thus, it appears that most epistemic communities fit the requirements for communal epistemic agents.  +
According to Barseghyan and Levesley, questions can have epistemic presuppositions.  +
Fraser and Sarwar argued that, as an epistemic stance, compatibility can be taken towards epistemic elements of all types.[[CITE_Fraser and Sarwar (2018)|p. 70]]  +
The only stance that an epistemic agent can take towards a method is [[Employed Method|''employment'']], i.e. a method is either employed or unemployed by an agent in theory evaluation.  +
In his [[Barseghyan (2018)|"Redrafting the Ontology of Scientific Change"]], Barseghyan argued that since [[Normative Theory|norms]] are a subtype of [[Theory|theory]], all the epistemic stances that can in principle be taken by an epistemic agent towards theories of all types can also be taken towards norms. In addition to these more universal stances, norms can also be [[Norm Employment|employed]], i.e. they have the capacity of constituting the actual expectations of the epistemic agent. This applies to norms of all types.[[CITE_Barseghyan (2018)]]  +
The stance of norm employment explained by Hakob Barseghyan  +
Rawleigh argued that, just like theories, [[Question|questions]] too can be [[Question Acceptance|accepted]] or unaccepted. A question can be accepted by an agent at one period and not accepted at another.  +
Consider, for instance, the question "what is the distance to the sphere of the stars?" which was once accepted as a legitimate topic of inquiry, but is no longer accepted. Similarly, the question "what is the mechanism of evolution of species?" is accepted nowadays, but wasn't accepted in the 17th century. Thus, we can say that question acceptance is the stance that epistemic agents take towards questions.  +
It is argued by Sarwar and Fraser that, in addition to the already accepted epistemic stances, the stance of ''scientificity'' can be taken towards theories.[[CITE_Sarwar and Fraser (2018)]]  +
According to Barseghyan, acceptance as an epistemic stance can be taken towards theories.[[CITE_Barseghyan (2015)|pp. 30-32]]  +
According to Barseghyan, the epistemic stance of pursuit can be taken towards theories, i.e. an epistemic agent can find a theory pursuitworthy.[[CITE_Barseghyan (2015)|pp. 30-40]]  +
According to Barseghyan, the epistemic stance of use can be taken towards theories, i.e. an epistemic agent can find a theory useful.[[CITE_Barseghyan (2015)|pp. 30-40]]  +
A physical object or system is an epistemic tool for an [[Epistemic Agent|epistemic agent]] ''iff'' there is a procedure by which the tool can provide an acceptable source of knowledge for answering some [[Question|question]] under the employed [[Method|method]] of that agent. Examples of epistemic tools include rulers, thermometers, the Large Hadron Collider, the Hubble Space Telescope, a written text, a computer, a blackboard and chalk, a crystal ball, etc.  +
There are several different senses in which one might take the concept of scientific error. One is the absolute sense. From our modern perspective, we might judge the geocentric [[Aristotle| Aristotelean-Ptolemaic cosmology's]] claim that the earth is stationary at the center of the universe as an error.[[CITE_Allchin (2001)]] The sense of error we are interested in here is not this absolute sense of error as judged from a future perspective. Instead, our definition takes the perspective of the historical [[Epistemic Agent| agent]] and the [[Mechanism of Method Employment | method employed]] by the agent at that time. Our definition is grounded in [[Mechanism of Theory Acceptance| the law of theory acceptance]]. When a [[Theory| theory]] is erroneously accepted, the [[Nature of Appraisal| assessing]] agent believes it has satisfied the requirements of their employed method when, in fact, it has not. Error may be due to an honest mistake by the epistemic agent that created the theory, or to scientific misconduct, i.e. actions that the creator of the theory knows to violate the epistemic and moral [[Normative Theory | norms]] of scientific inquiry accepted at the time.  +
The analysis of several instances of scientific error by [[Sarah Machado-Marques|Machado-Marques]] and [[Paul Patton|Patton]] suggests that the handling of these instances by scientists is in accord with the theory rejection theorem. Handling of error involves, according to this view, not only a rejection of some of the propositions that are considered to be accepted erroneously but also an acceptance of some new propositions. In some cases, an erroneously accepted ''first-order'' proposition is replaced by another ''first-order'' proposition incompatible with it. In other cases, an erroneously accepted ''first-order'' proposition is replaced by a ''second-order'' proposition stating the lack of sufficient reason for accepting the first-order proposition. According to this view, the handling of erroneously accepted theories involves their replacement with other theories; the handling of scientific error is therefore in full accord with the theory rejection theorem.  +
Nicholas Overgaard explains the topic  +
This category encompasses that knowledge which hasn't been openly formulated by the agent but can, in principle, be openly formulated. As such, the category is agent-relative. The definition was first suggested by [[Hakob Barseghyan]] and [[Maxim Mirkin]] in their ''[[Barseghyan and Mirkin (2019)|The Role of Technological Knowledge in Scientific Change]]''[[CITE_Barseghyan and Mirkin (2019)]] and was restated by Mirkin in his ''[[Mirkin (2018)|The Status of Technological Knowledge in the Scientific Mosaic]]''.  +
According to this definition, knowledge is said to be ''explicit'' if it has been openly formulated by the agent in question. As such the notion of ''explicit'' is agent-relative. The definition was first suggested by [[Hakob Barseghyan]] and [[Maxim Mirkin]] in their ''[[Barseghyan and Mirkin (2019)|The Role of Technological Knowledge in Scientific Change]]''[[CITE_Barseghyan and Mirkin (2019)]] and was restated by Mirkin in his ''[[Mirkin (2018)|The Status of Technological Knowledge in the Scientific Mosaic]]''.  +
F
Scientonomy Workshop, February 25, 2023  +
G
Allen makes a case that while many types of epistemic actions are local, i.e. available to only ''some'' agents at ''some'' periods, there are also global epistemic actions. According to Allen, "taking a stance of acceptance (i.e., accepting) seems to be a global action, as without this epistemic action no process of scientific change seems possible". [[CITE_Allen (2023)|p. 79]]  +
Allen makes a case that there is such a thing as a global epistemic action (e.g. ''accepting a theory'').[[CITE_Allen (2023)|p. 79]]  +
Allen makes a case that epistemic actions can be global or local.[[CITE_Allen (2023)]]  +
In the scientonomic workflow, the goal of peer review is to assess a paper for the pursuitworthiness of the modifications it suggests. Thus, peer reviewers should not evaluate submissions for acceptability, but only for pursuitworthiness.  +
In [[Nicholas Overgaard|Overgaard]]'s taxonomy, the term ''group'' refers to the most basic societal entity - a set of two or more people. As such, it is meant to play the role of the most abstract class which has two sub-classes - [[Community (Overgaard-2017)|community]] and [[Accidental Group (Overgaard-2017)|accidental group]].[[CITE_Overgaard (2017)]]  +
H
The editors should be granted official ''housekeeping rights'' to create and modify the necessary pages of the encyclopedia to handle ripple effects. Specific handling of ripple effects should depend on whether the additional change is ''implied'' by the modification or whether it is conceivable to accept the modification without accepting the additional change. There are two possible scenarios. # The additional change ''does not alter'' the accepted body of scientonomic knowledge, but merely explicates what is implicitly accepted by the community. In such cases, the editors should create and/or modify the necessary pages of the encyclopedia to handle the ripple effect. # The additional change ''alters'' the accepted body of scientonomic knowledge (i.e. it is possible to accept the original modification without accepting the additional change). In such cases, the editors should introduce these additional changes in a regular fashion by registering them as new suggested modifications for the community to comment on. As put by Shaw and Barseghyan: <blockquote>The key rule of thumb here is this: is it conceivable to accept the modification without accepting the ripple effect change in question? If so, then this new ripple effect change should be registered as a new suggested modification and discussed. If not, then no new suggested modification is necessary; instead, the editors should modify the encyclopedia to document the ripple effect.[[CITE_Shaw and Barseghyan (2019)|p. 10]]</blockquote>  +
Hierarchical authority delegation is a sub-type of [[Multiple Authority Delegation (Loiselle-2017)|multiple authority delegation]]. It describes a situation in which a community delegates authority over some [[Question|question]] to multiple communities, but at different degrees of authority. Consider a case of multiple authority delegation in which either expert A OR expert B might be consulted. If the word of expert A is always accepted over the word of expert B, we have a case of hierarchical authority delegation. Here is an example from the art world. The Modigliani ''catalogue raisonné'' by Ambrogio Ceroni is widely regarded by the art market as being the most reliable source when it comes to matters of Modigliani attribution. That being said, it is also widely accepted that the catalogue is incomplete. In 1997, Modigliani scholar Marc Restellini began creating a new catalogue raisonné for the artist. Between 1997 and 2015 (when Restellini's project was abandoned), the art market held a relationship of hierarchical authority delegation with Ceroni and Restellini. If the painting was listed in the Ceroni catalogue, it was considered authentic, regardless of Restellini's opinion. If it was not in the Ceroni catalogue but ''was'' considered authentic by Restellini, then it was accepted as such by the art market. The fact that both Ceroni and Restellini were valued as independent authorities makes this an instance of multiple authority delegation; the fact that Ceroni's word was valued over Restellini's makes it a case of hierarchical authority delegation.  +
The definition tweaks the [[Hierarchical Authority Delegation (Loiselle-2017)|original definition]] of the term by [[Mirka Loiselle|Loiselle]] to ensure that the relationship of hierarchical authority delegation can obtain between [[Epistemic Agent|epistemic agents]] of all types. It also substitutes [[Question|''question'']] for ''topic'', as the former is the proper scientonomic term that should be used.  +
To reconstruct the state of a ''mosaic'' at time ''t'', it is necessary to determine which theories were accepted and which methods were employed at that time. [[CITE_Barseghyan (2015)|p. 12]] This task is the concern of historical (empirical) questions. The '''theory of scientific change''', although closely linked, is concerned with theoretical questions. ''History of Scientific Change'' is one of the key concepts in current scientonomy. Thus, its proper definition is of great importance for constructing a descriptive theory of ''scientific change''.  +
I
Implication aims to capture the idea of one theory "following" from another. Although this idea is usually associated with that of deduction, sometimes deduction is too strict. When a theory constitutes evidence for another theory, that theory may imply the other depending on the strength of the evidence. There is no cutoff point for how strong the evidence needs to be that is shared by all agents. Instead, each agent has some ''rules of implication'' which determine when a theory ''implies'' another. These may be rules of logical entailment, of Bayesian confirmation theory, or of a detective's instinct. Having a notion of implication greatly clarifies an epistemic agent's theoretical thinking. Each agent may have their own rules of implication that differ from the modern concept of logical entailment or of deduction. Furthermore, implication clarifies what "deducible" means in [[The Third Law]]: a theory is deducible from another set of theories if it is implied by that set of theories.  +
Implicit is the opposite of ''explicit'' and, thus, it doesn't require more than a very minimalist definition. This definition creates a strong link between the two concepts and won't require any changes in the definition of ''implicit'' when the respective definition of ''explicit'' happens to change.  +
One putative method of learning the [[Employed Method|''employed method'']] of the time is by studying texts concerning scientific [[Methodology|''methodology'']] to learn what method was prescribed by the [[Scientific Community|community]] or advocated by ''great scientists''. However, such indicators can yield incorrect results. During the second half of the eighteenth century and the first half of the nineteenth century, the scientific community explicitly advocated the ''empiricist-inductivist'' methodology championed by [[Isaac Newton]]. This methodology held that new theories should be deduced from phenomena, and that unobservable entities should not be posited. However, the historical record actually shows that several theories positing unobservable entities did, in fact, become accepted during this period. These include Benjamin Franklin's theory of electricity, which posited an unobservable ''electric fluid'', the ''phlogiston'' theory of combustion, and the theory that light is a waveform in a ''luminiferous ether''. Thus the ''accepted methodology'' [[Scope of Scientonomy - Explicit and Implicit|does not necessarily indicate]] the ''employed method'' of the time. [[CITE_Barseghyan (2015)|pp. 53-54]] More promising indicators of method employment are ''indirect'', via inference from historical facts about what theories are accepted, the process of appraisal, and the prior state of the mosaic. For example, one might note what sort of theories become accepted during a particular time period by some community and try to determine why. If theories become accepted after some novel prediction they make has been confirmed, then the employed method of the time was most likely ''hypothetico-deductive''. On the other hand, if theories do not require confirmed novel predictions to become accepted, then some other method might be the one employed. The most suitable indirect indicators of method employment will vary from case to case with context and culture.  
''Indicators of theory acceptance'' are historical facts that provide evidence indicating that a scientific [[Theory|theory]] was accepted by some [[Scientific Community|community]] at a particular time. The opinions of [[Individual Level|individual scientists]] are not clear indicators of the acceptance of a theory by a community. Possible indicators are sources that typically indicate the opinion of an entire scientific community such as encyclopedias, textbooks, university curricula and the minutes of association meetings. [[CITE_Barseghyan (2015)| pp. 113-117]] Beginning in the eighteenth century, ''encyclopedias'' were a collective undertaking and thus typically good indicators of what was accepted at the time of their publication. However, until recently they were only published sporadically, and so generally can't provide a thorough description of successive states of the mosaic. Before the eighteenth century, encyclopedias were written by either a single author or an isolated small group. In such cases they may contain theories championed by the author but not necessarily accepted by the community. ''Textbooks'' are typically written with the objective of presenting the current state of knowledge in their field and are thus often a good gauge of accepted thinking. But because they are often written by a single author or a small number of authors, they should be treated with caution. ''University curricula'' similarly typically have the goal of exposing students to an accepted body of knowledge in a field. However, theories that are not considered the best available theory are sometimes nonetheless taught. Classical physics is taught to modern physics students not because it is deemed the best available description of its subject matter but because it is useful for many practical applications and is simpler and easier to understand than the more advanced treatments using quantum physics or general relativity theory. 
Items can also sometimes be included in a curriculum out of historical interest rather than current value. ''Minutes of association meetings'' can also sometimes be indicative of the stance of a community towards a particular theory. However, minutes can often provide only a fragmentary indication of what was accepted and what was not. No indicators of theory acceptance are universal or conclusive. Indicators are ''contextual'' to their time and cultural circumstances.  
According to Patton, there is such a thing as an individual epistemic agent, capable of taking [[Epistemic Stance|epistemic stances]] towards [[Epistemic Element|epistemic elements]].[[CITE_Patton (2019)]]  +
According to Patton, individuals are "capable of taking epistemic stances towards epistemic elements, with reason, based on a semantic understanding of the elements and their available alternatives, and with the goal of producing knowledge".[[CITE_Patton (2019)|p. 82]]  +
The notion of epistemic agency implies that an agent takes epistemic stances ''intentionally''. That is: * the agent has a semantic understanding of the propositions that constitute the epistemic element in question, and of its alternatives, and * the agent is capable of choosing among them with reason, and with the goal of acquiring knowledge. It is clear that a typical individual human being can satisfy these requirements. The main exceptions are prelinguistic infants, or people with certain neurological conditions that render them incapable of understanding propositions. Besides these absolute constraints, the applicability of the definition may also vary as a matter of degree, since individuals may differ one from another in the depth of their semantic understanding of the epistemic element in question and other contextually relevant epistemic elements. Such differences might be produced, for example, by scientific or professional training. An individual's merits as an epistemic agent will be assessed by others based on whether their claims can satisfy the requirements of the [[Method|method]] employed by those others. The issues raised by norms of epistemic merit are best understood in terms of the concept of [[Authority Delegation|authority delegation]].  +
By the ''individual level'' Barseghyan means an "individual scientist who has her own set of ideas and beliefs about the world, and employs certain methods which might be different than the accepted methods of the time".[[CITE_Barseghyan (2015)|p. 43]]  +
The category is agent-relative and encompasses that knowledge which cannot - even in principle - be explicated. The definition was first suggested by [[Hakob Barseghyan]] and [[Maxim Mirkin]] in their ''[[Barseghyan and Mirkin (2019)|The Role of Technological Knowledge in Scientific Change]]''[[CITE_Barseghyan and Mirkin (2019)]] and was restated by Mirkin in his ''[[Mirkin (2018)|The Status of Technological Knowledge in the Scientific Mosaic]]''.  +
The following inferences can be drawn regarding theory assessment outcomes from the acceptance or unacceptance of two contender theories: [[File:Inferring Theory Assessment Outcomes from Acceptance Unacceptance of Two Contenders (Patton-Overgaard-Barseghyan-2017).png|790px|center||]]  +
The following inferences can be drawn regarding theory assessment outcomes from the acceptance or unacceptance of a single contender theory:  +
L
Allen explains that to say that a local epistemic action is available to an epistemic agent amounts to saying that the agent employs the norm that such an action is permissible/desirable/obligatory/etc.[[CITE_Allen (2023)|p. 79]]  +
Allen's theorem is based on the definition of ''local action availability'' that states that "the availability of local epistemic action ''A'' to an agent amounts to the employment of the norm that says “Epistemic action ''A'' is permissible/obligatory/desirable/etc.”".[[CITE_Allen (2023)|p. 79]] Thus, from the law of norm employment, one can argue that:[[CITE_Allen (2023)|p. 81]] <blockquote> For an action ''A'' to become available to an agent, the agent must employ at least one norm and accept one theory from which the norm “Action ''A'' is permissible/desirable” deductively follows. </blockquote> Thus, it follows from the law of norm employment that a local epistemic action becomes available to an agent only when its permissibility/desirability is derivable from a non-empty subset of other elements of the agent’s mosaic.  +
Allen shows that the theorem is a deductive consequence of the ''law of norm employment'' and the definition of ''local action availability''. [[File:Local Action Availability theorem Deduction (Allen-2023).png|662px|center||]]  +
Allen suggests that local actions are the ones that are not universally available but are specific to a time period and/or an agent. For example, "such epistemic actions as simulating, experimenting, or modelling seem to be local actions since they need not necessarily be part of the repertoire of epistemic actions of all conceivable epistemic agents; such local actions emerge at a certain time and become available to some but not all epistemic agents".[[CITE_Allen (2023)|p. 79]]  +
Allen points out that many epistemic actions are local (e.g. actions of simulating, experimenting, or modelling).[[CITE_Allen (2023)|p. 79]]  +
Allen argues that many types of epistemic actions are local (e.g. simulating, modeling, experimenting).[[CITE_Allen (2023)|p. 79]]  +
TODO: Nikki add a description  +
According to Barseghyan and Levesley, questions can have logical presuppositions.  +
M
The second law explained by Gregory Rupik  +
Any ''method'' is essentially a set of criteria which can become [[Employed Method|employed]] in theory evaluation. Methods can be very general, applying to theories of a variety of types (e.g. ''the hypothetico-deductive method''), or very specific (e.g. ''the double-blind trial method'' of drug testing). [[CITE_Barseghyan (2015)|pp. 4-5]] Methods of theory evaluation should be differentiated from ''research techniques'', which are used in theory construction and data gathering.[[CITE_Barseghyan (2015)|p. 5]]  +
This definition of ''method'' is meant to encompass the criteria of evaluation of all types, regardless of their being explicit or implicit, and thus merge what was previously separated into two classes of elements - [[Method (Barseghyan-2015)|methods]] and [[Methodology (Sebastien-2016)|methodologies]].  +
According to this definition, in a hierarchy of methods more stringent requirements take precedence over less stringent requirements. Thus, when there are theories satisfying the most stringent requirements, these theories become accepted. However, when no such theory is available, the epistemic agent in question seeks theories that satisfy the requirements of the second - less stringent - method in the hierarchy. If such a theory is not found, the agent is then prepared to accept theories that satisfy the even less stringent requirements of the third method in the hierarchy, and so on.[[CITE_Mercuri and Barseghyan (2019)]]  +
As argued by [[Mathew Mercuri|Mercuri]] and [[Hakob Barseghyan|Barseghyan]], it is often the case that "criteria employed by the same epistemic agent constitute a certain preference hierarchy",[[CITE_Mercuri and Barseghyan (2019)|p. 45]] illustrated among other things by the fact that "practitioners in different fields customarily speak of more or less reliable evidence".[[CITE_Mercuri and Barseghyan (2019)|p. 46]] For example, when the community of art historians attempts to establish the authenticity of a certain work of art, they often accept the position of the expert they find most reliable; if, for whatever reason, this expert doesn't have a position on the authenticity of that work of art, the community refers to their second-best expert, and so on.[[CITE_Loiselle (2017)]] Another example of method hierarchies comes from the field of clinical epidemiology that features "a variety of different requirements – from more stringent to more lenient".[[CITE_Mercuri and Barseghyan (2019)|p. 58]] Thus, when the requirements of the randomized controlled trial method are met, the results of the study become accepted. If, however, no studies meet these requirements, clinical epidemiologists often accept the results of studies that satisfy less stringent requirements. Mercuri and Barseghyan discuss a number of such cases in their [[Mercuri and Barseghyan (2019)|''Method Hierarchies in Clinical Epidemiology'']].[[CITE_Mercuri and Barseghyan (2019)]]  +
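The fall-through logic of such a method hierarchy can be sketched in a few lines of Python. This is a minimal illustration only; the function and data names below are hypothetical and not part of any scientonomic formalism:

```python
def assess(theories, method_hierarchy):
    """Return the theories accepted under a hierarchy of methods,
    ordered from most to least stringent (illustrative sketch)."""
    for satisfies in method_hierarchy:  # try the most stringent method first
        satisfying = [t for t in theories if satisfies(t)]
        if satisfying:
            # Some theory meets this level; less stringent methods are ignored
            return satisfying
    return []  # no theory satisfies any method in the hierarchy


# Hypothetical drug-testing example with two levels of stringency
meets_rct_requirements = lambda study: study.get("rct", False)
meets_observational_requirements = lambda study: study.get("observational", False)

studies = [{"name": "cohort study", "observational": True}]
accepted = assess(studies, [meets_rct_requirements, meets_observational_requirements])
# The cohort study is accepted only because no study meets the RCT requirements
```

Had a study satisfying the randomized controlled trial requirements been present, the observational study would not have been consulted at all, mirroring the preference hierarchy Mercuri and Barseghyan describe.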
According to Barseghyan's 2018 redrafted ontology, methods are a species of normative theories.[[CITE_Barseghyan (2018)]]  +
According to ''the method rejection theorem'', a [[Method|method]] ceases to be employed only when other methods that are incompatible with it become employed.  +
By [[The First Law (Barseghyan-2015)|the first law]] for methods, an employed method will remain employed until it is replaced by other methods. By [[Compatibility Corollary (Fraser-Sarwar-2018)|the compatibility corollary]], the elements of the scientific mosaic are compatible with each other at any moment of time. Thus, a method can only become rejected when it is replaced by an incompatible method or methods.[[CITE_Barseghyan (2015)|pp. 172-176]] [[CITE_Fraser and Sarwar (2018)|pp. 72-74]] [[File:Method Rejection Theorem deduction (Barseghyan-Fraser-Sarwar-2018).png|627px|center||]]  +
Pandey makes a case that the first law and all of its corollaries are tautological.[[CITE_Pandey (2023)]]  +
This definition of the term confines it to the respective normative ''field'' of inquiry. Specific methodological ''theories'' are referred to as [[Method (Barseghyan-2018)|''methods'']].  +
A methodology can affect an employed method when it implements one or more abstract requirements of another employed method. Thus, the role normative methodology plays in the process of scientific change is a creative role, in which methods are changed through the implementation of abstract requirements from some other employed method. This theorem follows from the [[The Third Law (Barseghyan-2015)|former description of the third law]], which states that a method becomes employed only when it is deducible from other employed methods and accepted theories of the time. [[File:Methodology-shapes-method.jpg|607px|center||]] This description of the third law leaves room for methodologies to play an active role in scientific change in cases when a ''concrete'' method fulfills the requirements of an employed ''abstract'' method. The same abstract requirements can usually be implemented in a wide range of different ways. For instance, there is a whole array of concrete cell counting methods, all implementing the same abstract requirement that, when counting the number of cells, the resulting value is acceptable only if it is obtained with an "aided" eye.[[CITE_Barseghyan (2015)|pp. 151-152]] In such cases, methodology can play a decisive role in method employment; what later becomes the requirements of the employed method can be first suggested as a methodology. Thus, the double-blind trial method was first devised as a methodology, as a set of explicitly stated rules, and only after that did it become actually employed as a method of drug testing.[[CITE_Barseghyan (2015)|pp. 240-243]] [[Sebastien (2016)]]'s new definition of [[Methodology (Sebastien-2016)|methodology]] offers an alternate means for methodologies to shape methods, although this is not stated in the existing formulation of the theory. 
Because methodology is understood as a subkind of [[Normative Theory]], it should be possible, under the [[Third Law]], to deduce an abstract method from a set of theories that includes some of the normative methodologies a community holds about its method. Any concrete implementation of that method would then have to take into account how the community believes its method works, just as a community takes into account descriptive theories (e.g. the placebo effect and experimenter's bias) when [[Mechanism of Method Employment|employing a new method]].  
The workshop discussion of this modification (25.02.2023)  +
The workshop discussion of this modification (25.02.2023)  +
The workshop discussion of this modification  +
The workshop discussion of this modification  +
The workshop discussion of this modification  +
The workshop discussion of this modification (25.02.2023)  +
The workshop discussion of this modification  +
The workshop discussion of this modification  +
The workshop discussion of this modification  +
According to Barseghyan (2015), for mosaics to merge, that is, to "turn into one united mosaic," there must first exist (at least) two distinct mosaics. This necessarily means that there are elements which are present in one mosaic but absent in the other. "To use the language of set theory," Barseghyan writes, "these are the elements that constitute the so-called ''symmetric difference'' of two mosaics [...] Therefore, in order for the two mosaics to merge into one, these elements should either be rejected in both or accepted in both, so that the differences between the two are resolved".[[CITE_Barseghyan (2015)|p. 214]] [[File:Symmetric_Difference.png|527px|center||]]  +
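Barseghyan's set-theoretic point can be illustrated with Python's built-in set operations. The mosaics and element names here are purely illustrative, not a reconstruction of any actual historical mosaic:

```python
# Two hypothetical mosaics, each represented as the set of its accepted elements
mosaic_a = {"Newtonian physics", "Protestant theology", "Cartesian vortex theory"}
mosaic_b = {"Newtonian physics", "Protestant theology", "Huygens's theory of gravity"}

# The symmetric difference: the elements present in exactly one of the two mosaics
difference = mosaic_a ^ mosaic_b

# For the two mosaics to merge, every element of this set must end up
# either accepted in both mosaics or rejected in both
```

The elements shared by both mosaics pose no obstacle to a merge; only the elements in the symmetric difference need to be resolved one way or the other.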
According to Barseghyan (2015), "the acceptance of the Newtonian theory led to a mosaic merge". Specifically, "it led to the merging of the Dutch and Swedish mosaics into a unified mosaic with the Newtonian natural philosophy and Protestant theology":[[CITE_Barseghyan (2015)|p. 215]] <blockquote>It is well known that, on most of the Continent, the Newtonian theory (together with its 18th century modifications) became accepted only after the confirmation of one of its novel predictions. Although, according to popular narratives, the theory was confirmed only in 1758 after the return of Halley’s comet, it is safe to say that it was actually confirmed in the period between 1735 and 1740 during the observations of the Earth’s shape. The story goes like this. In 1735, the accepted natural philosophy on most of the continent was the updated version of the Cartesian theory, which assumed that the Earth must be slightly elongated at the poles. The assumption that the Earth is a prolate spheroid was also in accord with the results of the geodesic measurements of Giovanni Domenico Cassini and his son Jacques Cassini announced in 1718. Initially, however, the Earth’s prolateness wasn’t a consequence of the Cartesian natural philosophy. When Jacques Cassini announced his results, the accepted theory of gravity was a version of Descartes’s vortex theory modified by Huygens. According to Huygens’s theory, the equilibrium state of any homogenous fluid mass, subject to aethereal pressure, was not prolate but oblate spheroid. Thus, in 1718, the prolateness of the Earth announced by Cassini was an anomaly for the accepted Cartesian natural philosophy. In the period between 1720 and 1734 several attempts were made to reconcile the results of Cassinis’ measurements with the accepted theory of Huygens. There is no unanimity among the historians as to which reconciliation became actually accepted. 
On Barseghyan's reckoning, it was the very first reconciliation provided by Mairan in 1720, which absorbed the anomaly by stipulating the Earth’s primitive prolateness.[[CITE_Terrall (1992)|p. 212]] In any case, we know for sure that by 1735 the prolate-spheroid Earth was already part of the accepted version of the Cartesian natural philosophy. As for the Newtonian theory (which was a contender at that time), it was predicting that the Earth is slightly flattened at the poles, i.e. that the Earth is an oblate spheroid. In order to end the controversy, the French Académie des Sciences organized two expeditions to Peru (1735-1740) and to Lapland (1736-1737). The latter expedition led by Maupertuis who was accompanied, among others, by Swedish astronomer Anders Celsius, returned to Paris in the summer of 1737. Its results showed that the prediction of Newton’s theory was correct.[[CITE_Terrall (2003)]] This conclusion was also confirmed by Jacques Cassini’s son César-François Cassini de Thury who re-measured the Paris-Perpignan meridian in 1740.[[CITE_Terrall (1992)|p. 234]] As a result, the Newtonian theory replaced the Cartesian theory in all the mosaics where the latter was accepted. In particular, this resulted in the merging of all protestant mosaics where the Newtonian theory became accepted.[[CITE_Barseghyan (2015)|pp. 215-216]]</blockquote>  
To understand what is meant by mosaic split, consider the following case. "A community initially accepts some theories and employs some methods; in other words, initially, there is one mosaic of theories and methods. Also, as a result of some events, this initially united community transforms into two different communities with two somewhat different mosaics of theories and methods."[[CITE_Barseghyan (2015)|p.202]] This is different than mere disagreement. ''Mosaic split'' only occurs if there are two communities that each present their different theories as accepted (in contexts like articles, conferences, textbooks and so on). That is, there is disagreement concerning the ''status'' of certain theories, and not just on the theories themselves.[[CITE_Barseghyan (2015)|p.203]] There are several possible scenarios for ''mosaic split'' to occur. As per Barseghyan (2015), here are the possibilities: "a mosaic can split when the requirements of the current method are simultaneously satisfied by two or more competing theories. On the other hand, a mosaic can split when the outcome of theory assessment is inconclusive. While in the former case a mosaic split takes place ''necessarily'', in the latter case it is merely ''possible''."[[CITE_Barseghyan (2015)|p.203]] The derivation from these scenarios to resulting theorems about mosaic split can be found respectively on the [[Necessary Mosaic Split theorem (Barseghyan-2015)]] and [[Possible Mosaic Split theorem (Barseghyan-2015)]] pages.  +
A quick example of ''mosaic split'' is formulated by Barseghyan (2015) as follows. <blockquote>Take for instance the famous early 18th century case of Newtonianism in Britain vs. Cartesianism in France. If we were to go back to the 1730s we would spot at least two distinct scientific communities, with their distinct mosaics. While the curricula of the British universities included the Newtonian natural philosophy, the French universities taught the Cartesian natural philosophy among other things. In short, there is an instance of mosaic split if and only if there are two or more parties that take different theories to be accepted.[[CITE_Barseghyan (2015)|p.203]]</blockquote> Therefore, mosaic split is not synonymous with regular scientific disagreement.  +
Barseghyan (2015) illustrates the distinction here succinctly: <blockquote>Two physicists or even two groups of physicists may disagree on one topic or another. Yet, as long as they take the same theories as accepted ones, there is a regular scientific disagreement. Suppose, for instance, there are two groups of quantum physicists which subscribe to two different quantum theories – say, the so-called Many Worlds theory and GRW theory respectively. Suppose also that the two groups understand that the currently accepted theory is the orthodox quantum mechanics. Consequently, in their university lectures both groups present the orthodox theory as the currently accepted one. Here we have a typical example of scientific disagreement. The members of the two groups may even tell their students that they personally believe there is a better theory available. But as long as they stress that their personal favourite theory is not the currently accepted one, we deal with an instance of regular scientific disagreement.[[CITE_Barseghyan (2015)|p.202-3]]</blockquote> Barseghyan also provides a more historically grounded example in the context of the physics community: <blockquote>Imagine a group of physicists circa 1918 who considered general relativity as the best available description of its domain. This view was in disagreement with the position of the vast majority of scientists who believed in the then-accepted version of the Newtonian theory. Yet there was no mosaic split, since both the Newtonians and Einsteinians clearly realised which theory was accepted and which theory was merely a contender. Take Eddington, for instance, who was in that small group of early adherents of general relativity. He had no illusions regarding the status of general relativity, for he knew perfectly well that it wasn’t the accepted theory.[[CITE_Barseghyan (2015)|p.203]]</blockquote>  +
Multiple authority delegation is a sub-type of authority delegation. It describes a situation in which a community delegates authority over some topic to multiple communities. For example, in a case where a community might delegate authority to expert A OR expert B over topic X, we have a case of multiple authority delegation. Multiple authority delegation can be further divided into two categories: [[Hierarchical_Authority_Delegation_(Loiselle-2017)|hierarchical]] and [[Non-Hierarchical_Authority_Delegation_(Loiselle-2017)|non-hierarchical]].  +
The definition tweaks the [[Multiple Authority Delegation (Loiselle-2017)|original definition]] of the term by [[Mirka Loiselle|Loiselle]] to ensure that the relationship of multiple authority delegation can obtain between [[Epistemic Agent|epistemic agents]] of all types. It also substitutes [[Question|''question'']] for ''topic'', as the former is the proper scientonomic term that should be used.  +
Overgaard and Loiselle illustrate the relationship of mutual authority delegation by a number of examples. For one, physicists acknowledge that biologists are the experts on questions concerning life, and likewise biologists acknowledge that physicists are the experts on questions concerning physical processes. Similar relationships can be found within individual scientific disciplines. Consider, for instance, the relationship between theoretical and applied physicists, where despite the differences in their methods and overall objectives, the two communities customarily delegate authority to each other on a wide array of topics.  +
The definition tweaks the [[Mutual Authority Delegation (Overgaard-Loiselle-2016)|original definition]] of the term by [[Nicholas Overgaard|Overgaard]] and [[Mirka Loiselle|Loiselle]] to ensure that the relationship of mutual authority delegation can obtain between [[Epistemic Agent|epistemic agents]] of all types. It also substitutes [[Question|''question'']] for ''topic'', as the former is the proper scientonomic term that should be used. Overgaard and Loiselle illustrate the relationship of mutual authority delegation by a number of examples. For one, physicists acknowledge that biologists are the experts on questions concerning life, and likewise biologists acknowledge that physicists are the experts on questions concerning physical processes. Similar relationships can be found within individual scientific disciplines. Consider, for instance, the relationship between theoretical and applied physicists, where despite the differences in their methods and overall objectives, the two communities customarily delegate authority to each other on a wide array of topics.  +
N
According to the [[Non-Empty Mosaic theorem (Barseghyan-2015)|non-empty mosaic theorem]], there must be at least one element present in a mosaic. The Necessary Method theorem specifies that this element must be a method. That is, "one method is a must for the whole enterprise of scientific change to take off the ground".[[CITE_Barseghyan (2015)|p. 228]] What would this method be? As per Barseghyan (2015): <blockquote> This necessary method cannot be [[Substantive Method|substantive]]. Since a substantive method is necessarily based on at least one contingent proposition, it is not a necessary element of any mosaic. Indeed, any substantive method can become employed after the acceptance of those contingent propositions on which it is based. Of course, in some mosaics, substantive methods can also be present from the outset. Moreover, it is quite likely that even the earliest of mosaics tacitly contained some primitive substantive methods (e.g. “trust your senses”, or “trust the chieftain”). Yet, the key theoretical point is that no substantive method is necessarily part of any mosaic, for a substantive method can become employed after the acceptance of the theories on which it is based. Therefore, the necessary method is not substantive, but [[Procedural Method|procedural]], i.e. it doesn’t presuppose any contingent propositions. But it is a procedural method of a very special kind in that it cannot presuppose any propositions whatsoever: "the method that is necessarily present in any mosaic is not based on any propositions".[[CITE_Barseghyan (2015)|p. 230]] </blockquote> In other words, it must be the most abstract of all methods. Any concrete method is an implementation of a more abstract method. Any concrete method is a logical consequence of the conjunction of some accepted theories and that abstract method (by the third law). Thus, a concrete method can become employed after the acceptance of the propositions on which it is based. 
<blockquote>Therefore, what we are looking for is the most abstract of all possible requirements. We have come across that requirement on many occasions: the most abstract requirement ''to accept only the best available theories''. This basic requirement is the most abstract of all, for it does not presuppose any other methods or theories. It is not surprising given that this abstract method is only a restatement of the definition of [[Theory Acceptance|acceptance]]: this abstract method basically says that a theory is acceptable when it is the best available description of its object. But since this abstract requirement isn’t based on any theories, it cannot become accepted; it must be built into any mosaic from the outset. As vague and unrestricting as this method is, it nevertheless performs two very important functions. First, it indicates the main goal of the whole scientific enterprise – the acquisition of best available descriptions. Second, being a link between accepted theories and more concrete methods, it allows us to modify our methods as we learn new things about the world, i.e. it allows for concrete methods to become employed as we accept new theories. In short, it is this abstract requirement that makes the process of scientific change possible.[[CITE_Barseghyan (2015)|pp. 230-231]]</blockquote> That is, any other method can be conceived as a deductive consequence of the conjunction of this abstract method and some accepted theories: [[File:All_employed_methods_derive_from_the_most_abstract_requirement.png|529px|center||]]  
Barseghyan's explanation of the deduction is as follows:[[CITE_Barseghyan (2015)|pp. 227-228]] <blockquote>By the [[Non-Empty Mosaic theorem (Barseghyan-2015)|Non-Empty Mosaic theorem]], any mosaic contains at least one element, which is either a theory or a method. But which one is it: is it a theory or is it a method? It is easy to see that if this necessary element of the mosaic were a theory, the process of scientific change would never begin in the first place. Suppose there is a community that accepts only one belief and employs no method whatsoever; this community has no expectations whatsoever. It is obvious that the mosaic of this community will never acquire another element. On the one hand, in order for new theories to become accepted into the mosaic, the mosaic must contain at least one method (by the second law). On the other hand, in order for the mosaic to acquire a new method, there must be not only accepted theories, but also at least one other employed method (by the third law). Indeed, if we recall the historical examples of the third law that we have discussed, we will see that new methods become employed when they are deductive consequences of accepted theories and at least one other employed method. Thus, the necessary (indispensable) element cannot be a theory – it must be a method.</blockquote> [[File:Necessary Method theorem Deduction.png|553px|center||]]  +
The requirement of 'testability', according to which a scientific theory must be empirically testable, is often portrayed as "one of the prerequisites of science", though it is by no means a necessary element in any mosaic. Barseghyan (2015) develops the case study as follows: <blockquote>The explanation is simple: the requirement of testability is ''substantive'' and, therefore, we can easily conceive of a mosaic where it is not present. It is substantive for it is based, among other things, on such a non-trivial assumption as “observations and experiments are a trustworthy source of knowledge about the world”. Thus, the requirement is not necessarily a part of any mosaic; it can become employed after the acceptance of the assumptions on which it is based. The historical record confirms this conclusion. It is well known that testability hasn’t always been among the implicit requirements of the scientific community. For example, it played virtually no role in the Aristotelian-medieval mosaic.[[CITE_Barseghyan (2015)|p. 139]] The same holds for any substantive method. For instance, the oft-cited requirement of repeatability of experiments is evidently part of our current mosaic, but not of every possible mosaic. Similarly, the requirement to avoid supernatural explanations is implicit in our contemporary mosaic, but it is not a necessary part of any mosaic.[[CITE_Barseghyan (2015)|p. 229]]</blockquote>  +
To better illustrate this theorem, we can imagine a community with a set of accepted propositions. Community φ accepts proposition α. For α to have become accepted, through the second law, we know that φ must have had implicit expectations which α satisfied. No matter what those expectations are, if the community had not harbored those expectations there could be no acceptance. Similarly, if we have a community φ which experiences a change of expectations (i.e. a change of method), it is deductively true that φ already had a set of expectations which could be referred to as a method.  +
To show why a necessary method must ''not'' presuppose any necessary propositions, we will consider a procedural method that does presuppose some necessary propositions: <blockquote>Let it be the prescription that “if a proposition is deductively inferred from other accepted propositions, it must also be accepted”. As we know, this abstract method of deductive acceptance is procedural, as it is based on the definition of deductive logical inference. Now, it is obvious that this procedural method can become employed after the acceptance of the proposition on which it is based. Therefore, this procedural method is not necessarily part of any possible mosaic. The same applies to any procedural method that presupposes at least one necessary proposition. Such methods aren’t necessarily present in any mosaic, for they can be employed after the acceptance of the necessary propositions on which they are based.[[CITE_Barseghyan (2015)|pp. 229-230]]</blockquote> [[File:Procedural_Methods_Can_Presuppose_Necessary_Propositions.png|550px|center||]]  +
Most theories can satisfy the abstract requirement of the necessary method. Barseghyan (2015) outlines the following example: <blockquote>Imagine a community with no initial beliefs whatsoever trying to learn something about the world. In other words, the only initial element of their mosaic is the abstract requirement to accept only the best available theories. Now, suppose they come up with all sorts of hypotheses about the world. Since their method is as inconclusive as it gets, chances are many of the hypotheses will simultaneously “meet their expectations”. In such circumstances, different parties will most likely end up accepting different theories, i.e. multiple mosaic splits are virtually inevitable. For example, while some may come to believe that our eyes are trustworthy, others may accept that intuitions (or gut feelings) are the only trustworthy source of knowledge. As a result, the two parties will employ different concrete methods (by the third law) and will end up with essentially different mosaics.[[CITE_Barseghyan (2015)|pp. 231-232]]</blockquote> [[File:Theory Satisfying Abstract Requirement Mosaic 1.png|497px|center||]] [[File:Theory Satisfying Abstract Requirement Mosaic 2.png|497px|center||]] <blockquote>These examples are not altogether fictitious. It is possible that something along these lines happened in ancient Greece, where some schools of philosophy accepted that the senses are, by and large, trustworthy, while other schools held that the senses are unreliable and that the only source of certain knowledge is divine insight (intuition). Thus, the historical fact of the existence of diverse mosaics in the times of Plato and Aristotle shouldn’t come as a surprise. As a result, at early stages, multiple mosaic splits are quite likely.[[CITE_Barseghyan (2015)|p. 233]]</blockquote>  +
Necessary [[Scientific Mosaic|mosaic]] split is a form of mosaic split that must happen if it is ever the case that two incompatible [[Theory|theories]] both become accepted under the employed [[Method|method]] of the time. Since the theories are incompatible, under the [[The Zeroth Law|zeroth law]], they cannot be accepted into the same mosaic, and a mosaic split must then occur, as a matter of logical necessity.[[CITE_Barseghyan (2015)|pp. 204-207]] The necessary mosaic split theorem is thus required to escape the contradiction entailed by the acceptance of two or more incompatible theories. In a situation where this sort of contradiction obtains, the mosaic is split and distinct communities are formed, each of which bears its own mosaic, and each mosaic will include exactly one of the theories being assessed. By the [[The Third Law|third law]], each mosaic will also have a distinct method that precludes the acceptance of the other contender theory.  +
The necessary mosaic split theorem follows as a deductive consequence of the [[The Second Law|second law]] and the zeroth law. Per the zeroth law, two incompatible elements cannot simultaneously remain in a mosaic, and per the second law any theory that satisfies the method of the time (and the assessment of the theory by the method is not inconclusive) is accepted into the mosaic. This creates the apparently contradictory situation where either of the two theories A) must be accepted because it satisfies the employed method and B) must not be accepted because it is not compatible with another accepted theory. [[File:Necessary-mosaic-split.jpg|607px|center||]]  +
Barseghyan illustrates the necessary mosaic split theorem with the example of the French and English physics communities circa 1730, at which time the French accepted the Cartesian physics and the English accepted the Newtonian physics.[[CITE_Barseghyan (2015)|p. 203]] Both communities had initially accepted the Aristotelian-medieval physics due to their mutual acceptance of the Aristotelian-medieval mosaic until the start of the eighteenth century,[[CITE_Barseghyan (2015)|p. 210]] but clearly had different mosaics within a few decades. According to the second law both the Cartesian and Newtonian physics must have satisfied the methods of the Aristotelian-medieval mosaic in order to have been accepted, but since both shared the same object and posited radically different ontologies they were incompatible with one another and could not both be accepted, per the zeroth law. The necessary result was that the unified Aristotelian-medieval community split and the resulting French and English communities emerged, each with a distinct mosaic.  +
Suppose we have some community C' employing method M' and that this community assesses two theories, T<sub>1</sub> and T<sub>2</sub>, both of which satisfy M'. Let us further suppose that T<sub>1</sub> and T<sub>2</sub> both describe the same object and are incompatible with one another. According to the second law both T<sub>1</sub> and T<sub>2</sub> will be accepted because they both satisfy M', but both cannot simultaneously be accepted by C' due to the zeroth law. The necessary mosaic split theorem says that the result will be a new community C<sub>1</sub> which accepts T<sub>1</sub> and employs the resulting method M<sub>1</sub>, which precludes its accepting T<sub>2</sub>. Simultaneously, a new community C<sub>2</sub> will emerge which accepts T<sub>2</sub> and employs the resulting method M<sub>2</sub>, which precludes its accepting T<sub>1</sub>. Barseghyan (2015) neatly summarizes this series of events: <blockquote> When two mutually incompatible theories simultaneously satisfy the implicit requirements of the scientific community, members of the community are basically in a position to pick either one. And given that any contender theory always has its champions (if only the authors), there will inevitably be two parties with their different preferences. As a result, the community must inevitably split in two.[[CITE_Barseghyan (2015)|p. 204]]</blockquote>  +
Hakob Barseghyan's lecture on Newtonian Worldview  +
The non-empty [[Scientific Mosaic|mosaic]] theorem asserts that in order for a process of [[Scientific Change|scientific change]] to be possible, the mosaic must necessarily contain at least one element. Scientific change is impossible in an empty mosaic. It can be deduced from the [[The Second Law (Barseghyan-2015)|second law]], which asserts that in order to become accepted into the mosaic, a [[Theory|theory]] is assessed by the [[Method|method]] actually employed at the time, and the [[The Third Law (Barseghyan-2015)|third law]], which asserts that a method becomes employed only when it is deducible from other employed methods and accepted theories of the time.[[CITE_Barseghyan (2015)|p. 226]]  +
Scientific change is impossible in an empty mosaic. It can be deduced from the second law, which asserts that in order to become accepted into the mosaic, a theory is assessed by the method actually employed at the time, and the third law, which asserts that a method becomes employed only when it is deducible from other employed methods and accepted theories of the time.[[CITE_Barseghyan (2015)|p. 226]] [[File:Non-empty-mosaic-theorem.jpg|608px|center||]]  +
This definition is meant to highlight the key difference between [[Epistemic Community|epistemic]] and non-epistemic communities. The former are said to have a collective intentionality to know the world, while the latter lack such an intentionality. A typical example of a non-epistemic community, according to [[Nicholas Overgaard|Overgaard]], is an orchestra that has a collective intentionality to play music but lack the intentionality of knowing the world.[[CITE_Overgaard (2017)|p. 59]] Another example of a non-epistemic community, according to Overgaard, is a political party. While a political party might have some accepted theories, such as ideas concerning, for instance, effective governance, "a political party would be considered a non-epistemic community because it lacks a collective intentionality to know the world".[[CITE_Overgaard (2017)|p. 59]]  +
Non-hierarchical authority delegation is a sub-type of multiple authority delegation. It describes a situation in which a community delegates authority over some topic to multiple communities, and treats each community as being at the same level of authority. Consider a case of multiple authority delegation in which either expert A OR expert B might be consulted. If the word of expert A is valued as highly as the word of expert B, we have a case of non-hierarchical authority delegation. At the moment, non-hierarchical authority delegation is only a theoretical possibility. No historical examples have been found.  +
The definition tweaks the [[Non-hierarchical Authority Delegation (Loiselle-2017)|original definition]] of the term by [[Mirka Loiselle|Loiselle]] to ensure that the relationship of non-hierarchical authority delegation can obtain between [[Epistemic Agent|epistemic agents]] of all types. It also substitutes [[Question|''question'']] for ''topic'', as the former is the proper scientonomic term that should be used.  +
This definition is meant to ensure that the notion of employment is applicable not only to methods but to norms of all types, as is the case in the ontology of epistemic elements suggested by [[Hakob Barseghyan|Barseghyan]] in 2018. According to that ontology, the capacity of being employed can be ascribed not only to norms of theory evaluation (i.e. methods), but to [[Epistemic Stances Towards Normative Theories - Norm Employment (Barseghyan-2018)|norms of all types]], including ethical norm and aesthetic norms.[[CITE_Barseghyan (2018)]]  +
Norm employment explained by Barseghyan  +
Pandey makes a case that the first law and all of its corollaries are tautological.[[CITE_Pandey (2023)]]  +
Whereas [[Implication (Palider-2019)]] is an analytic relation between theories, inferences are taken to be the "movements of thought" that lead to belief revision. As argued by Palider (2019), inferences, unlike implications, necessarily involve a normative component.[[CITE_Palider (2019)|p. 22]] An implication alone is insufficient for an agent to revise their beliefs, or accepted theories; what is needed is that the agent take the normative stance that they should accept the theory. A '''normative inference''' is what leads an agent to take such a normative stance. Palider (2019) separates normative inferences into three components. The first two are the acceptance of a theory and the acceptance that the theory implies another theory. The third condition states that the normative statement that one should accept the implied theory follows from some set of norms, alongside the implication and the initial theory. As an illustration, if one has the norm that they should accept the findings of double-blind trials when it concerns the efficacy of drugs, and that acetaminophen succeeds at a double-blind trial for treating headaches, then one can ''normatively infer'' that they should accept acetaminophen's efficacy against headaches. Note, however, that unlike a [[Sufficient Reason (Palider-2019)]], normative inferences do not involve the employment of the norms (or method). One can speak about theories normatively inferring others without committing to (i.e. employing) the norms in question. This allows for hypothetical discussions of what one would be committed to inferring if they employed a certain norm, without actually employing that norm.  +
While not explicitly stated, the definition assumes that normative propositions involve evaluation, i.e. they "say how something ''ought'' to be, what's good or bad, what's right or wrong".[[CITE_Barseghyan (2015)|p. 12]] In contrast with [[Descriptive Theory|''descriptive propositions'']], normative propositions do not aim to tell how things are, were, or will be, but rather what is good or bad, desirable or undesirable, permissible or impermissible.  +
According to Sebastien, "normative propositions are relevant to the process of scientific change", i.e. they "can be part of the scientific mosaic".[[CITE_Sebastien (2016)|p. 2]]  +
According to Sebastien, norms, such as those of ethics, aesthetics, or methodology, are normative theories.[[CITE_Sebastien (2016)]]  +
O
One-sided authority delegation is a sub-type of authority delegation. It describes a situation where one community delegates authority over some topic to another community, but the other community does not delegate any authority back. A good example of one-sided authority delegation is the relationship between contemporary philosophers and physicists. Philosophers themselves are not physicists (though they certainly can be), meaning they must rely on the theories accepted by physicists to conduct research about, say, the quantum entities that populate the world. As soon as physicists accept a new particle (e.g. the Higgs boson), philosophers too will accept the existence of that particle. However, if philosophers for some reason begin to debate the ontological status of that new particle, physicists are unlikely to pay any attention to the philosophers. So, at least in principle, it is possible for one community to delegate authority to another, but for the other to delegate no authority to the first community.  +
The definition tweaks the [[Mutual Authority Delegation (Overgaard-Loiselle-2016)|original definition]] of the term by [[Nicholas Overgaard|Overgaard]] and [[Mirka Loiselle|Loiselle]] to ensure that the relationship of one-sided authority delegation can obtain between [[Epistemic Agent|epistemic agents]] of all types. It also substitutes [[Question|''question'']] for ''topic'', as the former is the proper scientonomic term that should be used.  +
Paul Patton's overview of the scientonomic ontology  +
To say that the theory acceptance outcome ''accept'' obtained as a result of a theory's assessment by a method is the same as to say that it is prescribed that the theory must be accepted.  +
To say that the theory acceptance outcome ''inconclusive'' obtained as a result of a theory's assessment by a method is the same as to say that the theory ''can'' but ''shouldn't necessarily'' be accepted.  +
To say that a theory's assessment by a method produced the outcome "inconclusive" is the same as to say that the community itself couldn't tell whether the requirements of the method were conclusively met.  +
To say that the theory acceptance outcome ''not accept'' obtained as a result of a theory's assessment by a method is the same as to say that it is prescribed that the theory must not be accepted.  +
To say that a theory's assessment by a method produced the outcome "not satisfied" is the same as to say that the theory conclusively failed to meet the requirements of the method.  +
To say that a theory's assessment by a method produced the outcome "satisfied" is the same as to say that the theory conclusively met the requirements of the method.  +
P
Possible [[Scientific Mosaic|mosaic]] split is a form of mosaic split that can happen if it is ever the case that [[Theory|theory]] assessment reaches an inconclusive result. In this case, a mosaic split can, but need not necessarily, result.[[CITE_Barseghyan (2015)|pp. 208-213]] That is, "the sufficient condition for this second variety of mosaic split is an element of inconclusiveness in the assessment outcome of at least one of the contender theories".[[CITE_Barseghyan (2015)|p. 208]] Barseghyan notes that, "if there have been any actual cases of inconclusive theory assessment, they can be detected only indirectly".[[CITE_Barseghyan (2015)|p. 208]] Therefore: <blockquote>One way of detecting an inconclusive theory assessment is through studying a particular instance of mosaic split. Unlike inconclusiveness, mosaic split is something that is readily detectable. As long as the historical record of a time period is available, it is normally possible to tell whether there was one united mosaic or whether there were several different mosaics. For instance, we are quite confident that in the 17th and 18th centuries there were many differences between the British and French mosaics.[[CITE_Barseghyan (2015)|p. 208]] </blockquote> Thus, the historical examples of mosaic split below also serve as points of detection for historical instances of inconclusive theory assessment.  +
The possible mosaic split theorem follows as a deductive consequence of the second and zeroth laws, given a situation where the assessment of two theories obtains an inconclusive result. This will happen when it is unclear whether or not a theory satisfies the employed method of the community. [[File:Possible-mosaic-split.jpg|607px|center||]]  +
Barseghyan continues providing examples of possible mosaic split, noting that an analysis of the case in which there are two contender theories and not just one is "more illustrative".[[CITE_Barseghyan (2015)|p. 205]] <blockquote>The case with two contender theories is more illustrative. When two contender theories undergo assessment by the current method, each assessment can have three possible outcomes. Therefore, there are nine possible combinations of assessment outcomes overall and, in five of these nine combinations, there is an element of inconclusiveness.[[CITE_Barseghyan (2015)|p. 205]]</blockquote> [[File:Two_Contender_Theories_Possible_Assessment_Outcomes.png|509px|center||]] <blockquote>The actual course of events in the first four combinations is relatively straightforward. If the assessment of one theory yields a conclusive “accept” while the assessment of the other yields a conclusive “not accept”, then, by the second law, the former becomes accepted while the latter remains unaccepted. When the assessments of both theories yield conclusive “not accept”, then both remain unaccepted and the mosaic maintains its current state. Finally, when the assessment yields “accept” for both theories, then both theories become accepted and a mosaic split takes place, as we know from the [[Necessary Mosaic Split theorem (Barseghyan-2015)]] [[CITE_Barseghyan (2015)|p. 205]]</blockquote> [[File:Theory_Assessment_Outcomes_and_Actual_Courses_of_Events.png|581px|center||]] <blockquote>As we can see, in each of these four cases, there is only one necessary course of events. In other words, when the assessment outcomes of both theories are conclusive, the actual course of events is strictly determined by the assessment outcomes. This is not the case with the other five combinations of assessment outcomes. Let us consider them in turn.[[CITE_Barseghyan (2015)|p. 
206]]</blockquote> ''Accept/inconclusive'': What can happen when the assessment of one theory yields a conclusive “accept”, while the assessment outcome of the other theory is inconclusive? <blockquote>In such a scenario, the former theory must necessarily become accepted, while the latter may or may not become accepted. Therefore, only two courses of events are possible in this case: it is possible that only the former theory will become accepted and it is also possible that both theories will become simultaneously accepted (i.e. a [[Mosaic Split|mosaic split]] may take place).[[CITE_Barseghyan (2015)|p. 206]]</blockquote> ''Not accept/inconclusive'': What can happen when the assessment of one theory yields a conclusive “not accept”, while the assessment outcome of the other theory is inconclusive? <blockquote>In such an instance, it is impossible for the former theory to become accepted, while the latter may or may not become accepted. Thus, it is possible that both theories will remain unaccepted as well as it is possible that only the latter theory will become accepted. Finally, the [[mosaic split]] is also among the possibilities, since it is conceivable that one part of the community may opt for accepting the latter theory while the other part may prefer to maintain the current state of the mosaic. Disregard for a moment the former theory: it cannot become accepted, since its assessment yields a conclusive “not accept”. With the former theory out of the picture, we are left with the latter theory – the one with an inconclusive assessment outcome. Thus, this case becomes similar to the above-discussed case with only one contender theory: we have a contender with an inconclusive assessment outcome and, consequently, a mosaic split may take place provided that one part of the community decides to opt for the theory while the other part prefers to stick to the existing mosaic. 
Note that, in this case, a [[Mosaic Split|split]] is not a consequence of the simultaneous acceptance of two mutually incompatible theories.[[CITE_Barseghyan (2015)|pp. 206-207]]</blockquote> ''Inconclusive/inconclusive'': Finally, what can happen when the assessment outcomes of both theories are inconclusive? <blockquote>In such a scenario, both theories may or may not become accepted. Thus, it is possible that none of the theories will become accepted, just as it is possible that only one of the two will become accepted. It is also possible that both theories will become simultaneously accepted and, consequently, a [[Mosaic Split|mosaic split]] will take place.[[CITE_Barseghyan (2015)|p. 207]]</blockquote> Therefore, Barseghyan concludes through this meticulous example that "a mosaic split is possible in those cases where the assessment outcome of at least one contender theory is inconclusive".[[CITE_Barseghyan (2015)|p. 207]] [[File:Five_Cases_of_Possible_Mosaic_Split.png|589px|center||]]  
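The combinatorics above (two contenders, three assessment outcomes each, nine combinations, five of which contain an element of inconclusiveness) can be checked with a short enumeration. This is an illustrative sketch only, not part of the scientonomic formalism; the outcome labels and the helper function are assumptions introduced for the example:

```python
from itertools import product

# The three possible outcomes of a theory's assessment by a method.
OUTCOMES = ["accept", "not accept", "inconclusive"]

def split_possible(o1, o2):
    # A mosaic split is possible when at least one assessment outcome is
    # inconclusive (possible mosaic split theorem), or when both contenders
    # are conclusively accepted (necessary mosaic split theorem).
    return "inconclusive" in (o1, o2) or (o1 == "accept" and o2 == "accept")

# All combinations of assessment outcomes for two contender theories.
combinations = list(product(OUTCOMES, repeat=2))
inconclusive_cases = [c for c in combinations if "inconclusive" in c]
split_cases = [c for c in combinations if split_possible(*c)]

print(len(combinations))        # 9 combinations overall
print(len(inconclusive_cases))  # 5 contain an element of inconclusiveness
print(len(split_cases))         # 6 combinations where a split is possible
```

The count of six split-permitting combinations is the five inconclusive cases plus the conclusive accept/accept case, in which a split is not merely possible but necessary.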
Suppose we have a method for assessing theories about the efficacy of new pharmaceuticals that says "accept that the pharmaceutical is effective only if a clinically significant result is obtained in a sufficient number of randomized controlled trials." The wording of the method is such that it requires a significant degree of judgement on the part of the community - what constitutes 'clinical significance' and a 'sufficient number' of trials will vary from person to person and by context. This introduces the possibility of mosaic split when it is unclear whether two contender theories satisfy this requirement. Continuing the above example, suppose two drugs, A and B, are being tested for some condition C. We'll call T<sub>1</sub> the theory that A is more effective than B at treating condition C and T<sub>2</sub> the theory that B is more effective than A at treating condition C. These two theories are not compatible, and so cannot both be elements of the mosaic according to the [[The Zeroth Law|zeroth law]]. Suppose further that both are assessed by the method of the time, meaning that both are subject to double-blind trials. In these trials drug A is clearly superior to drug B at inducing clinical remission, but drug B has fewer side effects, is still more effective than a placebo, and has been tested in more studies. Even if we accept T<sub>1</sub>, we may have reason to suspect that T<sub>2</sub> better satisfies the method. We can interpret this in two ways: by supposing that our assessment shows that we should accept T<sub>1</sub> and that our assessment is inconclusive about T<sub>2</sub>, or by taking both assessments to be inconclusive. In the first case it is permissible according to the [[Second Law|second law]] to accept T<sub>1</sub> and to either accept or reject T<sub>2</sub>; in the second case both may be accepted or rejected. 
[[File:Assessment_outcomes_from_two_contenders_resulting_in_mosaic_split.jpg|454px|center||]] Because any time an assessment outcome is [[Outcome Inconclusive|inconclusive]] we may either accept or reject the theory being assessed, we always face the possibility that one subsection of the community will reject the theory and another subsection will accept it. In these cases the two communities now bear distinct mosaics and a mosaic split has occurred. However, it is important to note that the ambiguity inherent in inconclusive assessments means that it is never entailed that there will be competing subsections of the community. A community may, in the face of an inconclusive assessment, collectively agree to accept or reject the theory being assessed. Thus, in cases with an inconclusive assessment a mosaic split is possible but never necessitated by the circumstances.  
Barseghyan's example of mosaic split which ''may'' result from ''one'' contender theory, proceeds as follows. <blockquote> Consider first the case with one contender theory. Suppose there is a scientific mosaic with its theories and methods and there is also a contender theory which becomes assessed by the currently employed method. If the assessment outcome is conclusive “accept”, the theory necessarily becomes accepted. If the outcome is conclusive “not accept”, the theory remains unaccepted. Both of these cases are quite straightforward. But what will happen if the outcome turns out to be inconclusive, i.e. if the assessment by the current method doesn’t provide a definitive prescription? When the assessment outcome is inconclusive, there are three possible courses of events. First, the new theory can remain unaccepted; in that case the mosaic will maintain its current state. Second, the new theory can also become accepted by the whole community; in that case a regular theory change will take place and the new theory will replace the old one. None of these two scenarios is particularly interesting here. However, there is also the third possible course of events. When the outcome of theory assessment is inconclusive, members of the community are free to choose whichever of the two scenarios – they can accept the theory, but they can equally choose to leave it unaccepted. Naturally, there are no guarantees that all of them will necessarily choose the same course of action. It is quite conceivable that some will opt for accepting the new theory, whereas the rest will prefer to keep the old theory. In other words, when the assessment of a contender theory yields an inconclusive outcome, the mosaic may split in two.[[CITE_Barseghyan (2015)|pp. 204-205]]</blockquote> [[File:Assessment_Outcomes_from_One_Contender_Theory.png|498px|center||]]  +
Barseghyan (2015) contrasts the replacement of the Aristotelian-Medieval method with the Newtonian method in Britain and the Cartesian method in France -- a broad case which might seem like an instance of mosaic split, but is not -- with a more specific historical example of potential mosaic split. He outlines that specific historical example, ''the acceptance of the Cartesian natural philosophy in Cambridge circa 1680'', as follows: <blockquote>Let us begin with the available historical data. Prior to the 1680s, the Aristotelian-medieval natural philosophy was taught in schools across Europe, with alternative theories included into the curricula only sporadically. If my understanding is correct, the first university where the Cartesian natural philosophy was accepted and taught on a regular basis was Cambridge. Although the theory had been sporadically taught since the 1660s, it began to be taught systematically only circa 1680. Thus, it is not surprising that when one Cambridge professor Isaac Newton was writing his magnum opus, the main target of his criticism was Descartes’s theory, not that of Aristotle. According to the historical data, during the last two decades of the 17th century, Cambridge remained the only university where the Cartesian theory was generally accepted. The situation changed circa 1700, when the Cartesian natural philosophy together with its respective modifications by Huygens, Malebranche and others became accepted in France, Holland and Sweden. As for Oxford, it never accepted the Cartesian theory but switched directly to the Newtonian theory circa 1690. In Cambridge, the transition from the Cartesian natural philosophy to that of Newton took place in the 1700s. Most likely, the universities of the Dutch Republic (Leiden and Utrecht) were the first on the Continent to accept the Newtonian theory by 1720. 
In France and Sweden, the Newtonian theory replaced the Cartesian natural philosophy circa 1740. The picture wouldn’t be complete if we didn’t mention the important theological differences: Catholic theology was accepted in Paris; Anglican theology was accepted in Oxford and Cambridge; in Holland and Sweden the accepted theology was that of Protestantism.[[CITE_Barseghyan (2015)|pp. 211-212]]</blockquote> A draft timeline of the situation, including theological differences, at the end of the 17th century has been constructed by Barseghyan: [[File:Draft_Timeline_1680_Mosaics.png|1216px|center||]] Barseghyan (2015) continues as follows: <blockquote> Although the diagram hardly scratches the surface of the colorful 17-18th century landscape, it points to ''at least two possible candidates of mosaic split''. Apparently, there seem to have been a split in the Anglican mosaic of Britain circa 1680, when the Cartesian natural philosophy became accepted in Cambridge, and also probably in the Protestant mosaic sometime by 1720, when the Newtonian theory became accepted in Holland.[[CITE_Barseghyan (2015)|p. 212]]</blockquote> Barseghyan (2015) more closely examines the first potential case: the acceptance of Cartesian natural philosophy in Cambridge: <blockquote> If my reading is correct, then this was a typical case of mosaic split: after the acceptance of the Cartesian theory, the mosaic of Cambridge became different from the Aristotelian-Anglican mosaic of other British universities. Note that this mosaic split was caused by the acceptance of only one new theory. Therefore, it could only be a result of an inconclusive theory assessment. At this point, we can only hypothesize as to why exactly the outcome of the assessment of the Cartesian theory was inconclusive. My historical hypothesis is that it had to do with the inconclusiveness of the Aristotelian-medieval method employed at the time, i.e. 
with the vagueness of the implicit expectations of the community of the time. It is easily seen that the then-employed Aristotelian-medieval method allowed for two distinct scenarios of theory assessment. On the one hand, if a proposition was meant as a theorem, it was only expected to show that it did in fact follow from other accepted propositions. That much would be sufficient for a new theorem to become accepted. This part of the method is straightforward – no ambiguity here. If, on the other hand, a proposition was not meant as a theorem – if it was supposed to be a separate axiom – then it was expected to be intuitively true. But what does it mean to be intuitively true? Nowadays we seem to realize that no proposition can be intuitively true (unless of course it is a tautology) and that intuition, even when “schooled by experience”, is not the best advisor in theory assessment. Therefore, a theory could merely appear intuitively true to the community of the time. This was the actual expectation of the scientific community in the 17th century – the appearance of intuitive truth. One indication of this is the fact that both Descartes and Newton understood the vital necessity of presenting their systems in the axiomatic-deductive form. They also made all possible efforts to show that their axioms – the starting points of their deductions – were beyond any reasonable doubt. They both realized that if their theories are ever to be accepted, their axioms must appear clear to anyone who is knowledgeable enough to understand them. But this is exactly what was expected by the scientific community of the time. Yet, the requirement of intuitive truth is extremely vague: what appears intuitively true to me need not necessarily appear intuitively true to others. I think this can explain why the mosaic split of the 1680s took place. The axioms of the Cartesian natural philosophy were meant as self-evident intuitively true propositions. 
But as with any “intuitive truth”, scientists could easily disagree as to whether the axioms were indeed intuitively true. As a result, the outcome of the assessment of the Cartesian theory was “inconclusive”. In that situation, a mosaic split was one of the possible courses of events (by the possible mosaic split theorem). Of course the mosaic split wasn’t inevitable – it was merely one of the possibilities which actualized. This Aristotelian “bring before me intuitive true propositions” requirement was so vague that theory assessment could easily yield an “inconclusive” outcome and, consequently, result in a mosaic split. It is not surprising, therefore, that the British mosaic did actually split in the 1680s when the Cartesian natural philosophy was accepted only in Cambridge. This was an instance of ''possible mosaic split''.[[CITE_Barseghyan (2015)|pp. 212-213]]</blockquote>  
The definition assumes that it is possible to conceive of methods that do not presuppose any substantive knowledge about the world. If a method doesn't presuppose any accepted theories other than definitions, the method is procedural.[[CITE_Barseghyan (2015)|p. 219]] As a possible example of a procedural method, [[Hakob Barseghyan|Barseghyan]] mentions what he calls the ''deductive acceptance method'', according to which "if a proposition is deductively inferred from other accepted propositions, it is to be accepted".[[CITE_Barseghyan (2015)|p. 221]] This method, according to Barseghyan, presupposes only some definition of ''deductive inference'' as well as some very abstract method such as "only accept the best available theories".[[CITE_Barseghyan (2015)|pp. 220-221]] The latter is another possible instance of a procedural method, as it too doesn't seem to presuppose any substantive knowledge of the world.  +
In the scientonomic workflow, the discussions concerning suggested modifications should be published once a communal consensus is reached and the respective verdict is recorded in the encyclopedia. The discussions are to be published in the journal as special commentary articles co-authored by all participants of the discussion or in special edited collections. While it might be tempting to publish only those discussions that caused significant disagreement in the community, such an approach may inadvertently incentivize dissent and disagreement for the sake of getting published. In contrast, by publishing ''all'' discussions, we incentivize commenting without skewing the incentive towards disagreement.  +
As a distinct epistemic stance, [[Theory Pursuit|theory pursuit]] is not reducible to [[Theory Acceptance|acceptance]].  +
Q
A ''question'' is a subject or area of inquiry that epistemic communities can investigate. Questions are usually given in the form of an interrogative. They vary in their specificity and scope, from very wide (what are the properties of the universe?) to very narrow (why are there no instances of CP-violation observed in quantum chromodynamics?). Another term for ''question'' is ''topic''.  +
''Question Acceptance'' refers to one of the two stances that [[Epistemic Community|epistemic communities]] can take towards [[Question (Rawleigh-2018)|questions]], with the opposite stance being ''unacceptance''. A question is said to be accepted by an epistemic community if and only if said epistemic community takes the question to be a legitimate topic of inquiry.  +
Rawleigh emphasized that the process of scientific change involves not only theories and methods but also questions.[[CITE_Rawleigh (2018)]]  +
A question can be a subquestion of another question. A question ''Q'' is a subquestion of another question ''P'', if a direct answer to ''Q'' is also a partial answer to ''P''.  +
Rawleigh argued that questions are an integral part of the process of scientific change.[[CITE_Rawleigh (2018)]]  +
A study of the process of scientific change reveals many cases when a question that was considered legitimate in a certain time-period became illegitimate in another period. For example, questions such as “what is the weight of phlogiston?” or “why does some matter gain mass as it loses phlogiston?” were accepted as legitimate topics of inquiry for most of the 18th century. Yet, once the phlogiston theory was rejected, these questions became illegitimate. Another example is the question “what is the distance from the earth to the sphere of stars?”, which was once considered legitimate by astronomers but is no longer accepted.[[CITE_Rawleigh (2018)|p. 4]] Similarly, there are questions which are considered legitimate these days but weren't accepted even a few centuries ago. An example of this is the question “what’s the underlying mechanics of the evolution of species?” - a perfectly legitimate topic of biological research nowadays that would have been deemed illegitimate three hundred years ago.[[CITE_Rawleigh (2018)|p. 4]] These examples suggest that questions are part of the process of scientific change. More specifically, they are a subtype of [[Epistemic Element|epistemic element]].  +
TODO: Add the description  +
Pandey makes a case that the first law and all of its corollaries are tautological.[[CITE_Pandey (2023)]]  +
Pandey makes a case that the first law and all of its corollaries are tautological.[[CITE_Pandey (2023)]]  +
R
A '''Reason''' is a theory that is potentially hypothetical, i.e. not accepted, that may serve, if accepted, as a [[Sufficient Reason (Palider-2019)]] for accepting another theory. For example, one may say that a double-blind trial will constitute a reason for accepting a drug's efficacy, even if the double-blind trial has not yet been done. A reason is separated into the components of [[Implication (Palider-2019)]], [[Normative Inference (Palider-2019)]], and the employment of a relevant method. Together, these components assure that the reason implies the theory to be accepted, that it normatively infers that the theory should be accepted, and that the method by which that normative inference is evaluated is actually employed. Notably, a reason does not involve the acceptance of the theory serving as a reason, which is what separates a reason from a sufficient reason.  +
This is a very good reason indeed.  +
The paradox of normative propositions arises from the following three premises: # there have been many historical cases where employed [[Method|scientific methods]] conflicted with professed [[Methodology|methodologies]]; # by [[The Third Law (Barseghyan-2015)|the third law]], employed methods are deducible from accepted theories, including methodologies; # two propositions cannot be mutually inconsistent if one logically follows from the other. Sebastien's solution rejects premise (2) by clarifying that an employed method need not follow from ''all'' accepted theories, but only from ''some''. In those cases when an employed method is in conflict with an accepted methodology, it is an indication that the former doesn't follow from the latter. As for their mutual inconsistency, that is allowed by [[The Zeroth Law (Harder-2015)|the zeroth law]].  +
The paradox and its resolution explained by Gregory Rupik  +
Although both the theories and methods of science have changed over history and differ across disciplines, the ''nothing permanent'' thesis is denied. Instead, the fixed and stable features of science can take the form of dynamics or laws that govern changes in science in a piecemeal fashion. A theory of scientific change is possible by positing laws that describe transitions in science and its constituent elements. Although it is generally accepted that there seem to be no static transhistorical properties of science, this does not deny the possibility of laws governing the process of scientific change. Theories have, of course, proven themselves to be changeable. Methods of practice and theory appraisal have also proven to be changeable. The aims, goals, and philosophies behind what science ought to be have also clearly changed. However, although a case can be made that these properties of science are non-permanent, the fundamental mechanism governing how these properties change can nevertheless be permanent. Hence, the absence of static properties in science fails to show the impossibility of a general theory of scientific change. There remains the possibility of a mechanism of scientific change that governs the changes in theories, methods, and other elements of science. [[File:Fixed and changing methods.jpg|center|600px]]  +
In The Laws of Scientific Change (2015), Hakob Barseghyan argues that none of the social constructivist theses preclude the possibility of a general theory of scientific change. He provides different reasons to invalidate each of the respective social constructivist theses. The overall claim is that the argument from social construction does not undermine the possibility of the theory of scientific change (TSC). Barseghyan shows that each of the theses leads to bizarre implications that threaten not only the scientonomic project but all other disciplines that consist of descriptive propositions. Firstly, the contingency thesis does not void the possibility of TSC because the contingency thesis is itself a general theory of scientific change. That is to say, the contingency thesis is itself a general descriptive proposition that attempts to describe the mechanism by which science undergoes changes, namely, that there are no patterns in the evolution of science [[CITE_Barseghyan (2015)|p. 92]]. For example, the idea that Aristotelian-Medieval physics could have been directly replaced by Einstein’s general relativity without the intermediary stage of Newtonian physics is an inference of the contingency thesis; such a claim is itself a descriptive proposition. Therefore, in virtue of the contingency thesis being a descriptive proposition, it falls under the same category as a general theory of scientific change. Hence, the contingency thesis does not invalidate the scientonomic project [[CITE_Barseghyan (2015)|pp. 91-92]]. Secondly, the nominalist thesis similarly does not undermine the possibility of TSC, as its claim negates the validity of any descriptive proposition that attempts to describe a particular phenomenon. Given the nominalist claim, disciplines such as Biology, Chemistry and Physics would have to be discredited because they too consist of descriptive propositions. 
Therefore, if the nominalist thesis were true, it would be a threat not only to the general theory of scientific change but to all descriptive disciplines[[CITE_Barseghyan (2015)|p. 92]]. Finally, none of the three reducibility theses endangers the scientonomic project: • The ontological reducibility thesis does not undermine the project because the claim that higher-level systems are composed of lower-level elements does not imply that there can be no theory describing the higher-level system. For example, in Biology, the study of lower-level elements like genes does not imply that a theory at a higher level is not possible: the theory of evolution is a description of a higher-level system. Likewise, the general theory of scientific change is not undermined by this thesis [[CITE_Barseghyan (2015)|p. 94]]. • The epistemic reducibility thesis is not an obstacle to TSC because it has an unacceptable implication, just like the other theses. If the thesis were true, then the laws of Biology would be reducible to the laws of Chemistry, which in turn would be reducible to the laws of Physics. The definition of epistemic reducibility is not clearly agreed upon or formulated, but its basic premise would preclude not only TSC but all our schemes of knowledge. Therefore, it is not a threat to TSC in particular [[CITE_Barseghyan (2015)|p. 94]]. • The methodological reducibility thesis renders all higher-level theories pointless. It implies that only sociology can study the changes in the scientific mosaic and, therefore, that TSC is ultimately implausible. But since all higher-level theories are rendered pointless, all disciplines would be futile with the exception of Physics (note: it is assumed here that all disciplines can potentially be reduced to Physics). Therefore, just like the other theses, the methodological reducibility thesis does not pose any danger to the possibility of TSC [[CITE_Barseghyan (2015)|pp. 95-96]].  
S
The [[Scientific Mosaic|scientific mosaic]] is in a process of perpetual change. Most of the theories that we accept nowadays didn’t even exist two or three hundred years ago. Similarly, at least some of the methods that we employ in theory assessment nowadays have nothing to do with the methods employed in the 17th century. Thus, it is safe to say that the process of scientific change involves both theories and methods.[[CITE_Barseghyan (2015)|p. 9]] Changes in the scientific mosaic can be viewed as a series of successive frames, where each frame represents a state of that mosaic at a given point of time. Obviously, such a frame would include all accepted theories and all employed methods of the time. [[CITE_Barseghyan (2015)|p. 9]]  +
According to this definition, the scientific mosaic encompasses all [[Theory Acceptance|accepted]] theories and [[Employed Method|employed]] methods.[[CITE_Barseghyan (2015)|p. 5]] The definition assumes that theories and methods are the only two fundamental entities that undergo scientific change.[[CITE_Barseghyan (2015)|pp. 5-7]] The reason the set of theories and methods is called a “mosaic” and not, say, a “system” is that the elements of the mosaic may or may not be tightly adjusted; there may be considerable gaps between the elements of the mosaic. For instance, nowadays we realize that there is a considerable gap between general relativity and quantum mechanics and, yet, we do not hesitate to accept both.[[CITE_Barseghyan (2015)|p. 5]] While it is not included in the definition, it is understood that the bearer of a mosaic is a [[Scientific Community|scientific community]].[[CITE_Barseghyan (2015)|p. xi]]  +
According to this definition, scientific mosaic encompasses all accepted and employed epistemic elements. The definition is compatible with the ontology of epistemic elements that considers [[Question Is a Subtype of Epistemic Element (Rawleigh-2018)|questions]] and [[Theory Is a Subtype of Epistemic Element (Barseghyan-2015)|theories]] (including [[Method Is a Subtype of Normative Theory (Barseghyan-2018)|methods as a sub-type of normative theories]]) as the only two fundamental [[Subtypes of Epistemic Element|types of epistemic elements]]. In addition, by not referring to any epistemic element explicitly, the definition also purports to be compatible with any future ontology of epistemic elements insofar as that ontology assumes that elements can be accepted and employed.  +
Rather than conceiving a scientific mosaic as a simple set-theoretic unity of epistemic elements, this definition is model-theoretic: it replaces the explicitly set-theoretic wording “set of all epistemic elements” with a semantic “model of all accepted elements”.[[CITE_Rawleigh (2022)|p. 91]] The definition considers a scientific mosaic to be a model for interpreting all natural language sentences, whether those be observational, theoretical, or simply ordinary conversational sentences. TODO: Add Rawleigh's discussion from his 2022.  +
Scientific underdetermination is the thesis that the process of [[Scientific Change|scientific change]] is not deterministic, and science could have evolved differently than it did. Hypothetically, two [[Scientific Community|scientific communities]] developing separately could experience an entirely different sequence of successive states of their respective [[Scientific Mosaic|scientific mosaics.]] Even without the TSC, the implausibility of scientific determinism can be seen by considering the process of [[Theory|theory]] construction, which is outside the present scope of the TSC. Theory construction requires creative imagination, and the formulation of a given theory is therefore not inevitable. Still, underdetermination can also be inferred as a theorem from the axioms of the TSC.[[CITE_Barseghyan (2015)|pp. 196-198]] [[File:Scientific-underdetermination.jpg|607px|center||]] This deductive inference occurs as follows. The theorem is the consequence of two related theses: that of underdetermined method change, and that of underdetermined theory change. Since method change and theory change are exactly the transitions in the scientific mosaic, showing that neither method change nor theory change is deterministic is sufficient to imply the Scientific Underdetermination theorem (SUT). The underdetermination of method change follows from the 3rd Law: “A method becomes employed only when it is deducible from some subset of other employed methods and accepted theories of the time.” As a result of this law, a method can be employed in two ways: either it strictly follows from other accepted theories and employed methods, in which case the change is, in fact, deterministic, or it implements the abstract requirements of some other employed method. In the latter case, the change is underdetermined since abstract requirements can give rise to many different implementations. 
As a concrete example, suppose a community accepts a theory that states, “when counting the number of living cells, the resulting value is acceptable only if it is obtained with an ‘aided’ eye.” A number of different methods can implement this abstract requirement, such as the plating method or the counting chamber method.[[CITE_Barseghyan (2015)|p. 198]] Thus the method is underdetermined by the abstract requirements, so the process of method change implementing these requirements is not deterministic, which is exactly the statement of the underdetermination of method change. The underdetermination of theory change comes from the 2nd law and the possibility of inconclusive theory assessment. The 2nd law states that “In order to become accepted into the mosaic, a theory is assessed by the method actually employed at the time”. This assessment can result in conclusive acceptance, conclusive rejection, or it can be inconclusive. In both conclusive cases, the theory change is deterministic, but if the theory assessment is inconclusive, then the theory can be either accepted or rejected. So the process of theory change is not necessarily deterministic, since it is, in fact, possible for assessment to be inconclusive. The question of whether there actually exist cases of inconclusive theory assessment is a task for the history of science, and is irrelevant to our discussion. These two theses combine to form the SUT, since changes in theories and methods are all the transitions that occur in the scientific mosaic, and we have seen that the underdetermination of theory change and method change follow deductively from the 2nd law, the 3rd law, and the possibility of inconclusive theory assessment.  
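The skeleton of the deduction above can be summarized schematically; the notation (Det for "deterministic") is introduced here purely for illustration and does not appear in the source:

```latex
% Schematic summary of the deduction of the Scientific Underdetermination theorem (SUT)
\begin{align*}
(1)\;& \neg \mathrm{Det}(\text{method change})
     && \text{3rd law: abstract requirements admit many implementations}\\
(2)\;& \neg \mathrm{Det}(\text{theory change})
     && \text{2nd law: theory assessment can be inconclusive}\\
(3)\;& \text{mosaic transitions} = \text{theory changes} \cup \text{method changes}\\
\therefore\;& \neg \mathrm{Det}(\text{scientific change})
     && \text{(SUT, from 1--3)}
\end{align*}
```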
Sarwar and Fraser argued that in addition to other epistemic stances, there is also the stance of scientificity. Thus, epistemic agents can consider a theory scientific or unscientific regardless of whether they accept, use, or pursue it. As such, they argue, scientificity is a distinct epistemic stance.[[CITE_Sarwar and Fraser (2018)]]  +
The key stages of the workflow are: * '''Pose Questions:''' The goal of this stage is to scrutinize the current state of the scientonomic theory and our knowledge of scientific change and identify as many open questions as possible. The annual [[Scientonomy Seminar|seminar on scientonomy]] hosted by the University of Toronto's [http://hps.utoronto.ca/ Institute for the History and Philosophy of Science and Technology] is currently the main venue facilitating this stage of the workflow. * '''Suggest Modifications:''' The goal of this stage is to advance our knowledge of scientific change by proposing modifications to our current body of knowledge. These suggested modifications are published and properly documented. These modifications are currently published in the [[Journal of Scientonomy]] but, in principle, they can be published in any journal which makes use of the scientonomic mechanism of modifications. Once a modification is published, this encyclopedia documents the [[:Category:Modification|suggestion]] and invites experts to review it. * '''Evaluate Modifications:''' The goal of this stage is to assess the suggested modifications and decide which of them are acceptable and which are not. This is done by [[Community:Scientonomy|the community of scientonomists]] on the respective discussion pages of this encyclopedia. If a consensus emerges, the fate of the modification is documented. If a modification causes disagreement among scientonomists, it becomes a topic of discussion during [[Scientonomy Workshop|scientonomic workshops]], which aim at bridging the gaps between opposing parties and arriving at consensus. * '''Document Changes:''' The goal of this stage is to document all the changes in our communal body of knowledge. If a modification is considered acceptable by the community, then the respective articles of this encyclopedia are modified to reflect that change. 
If a modification is considered unacceptable, then the respective verdict is documented for that modification. The primary role of this encyclopedia in the scientonomic workflow is to document the current state of scientonomic knowledge, trace all suggested modifications, and list open questions. Here is an outline of the main stages of the scientonomic workflow: This workflow gives researchers a simple way of knowing where the community stands on different topics, i.e. what theories it currently accepts, what open questions it tries to answer, what modifications have been proposed and how they have been assessed. It ensures that our communal knowledge is advanced in a ''piecemeal'' and ''transparent'' fashion: * '''Piecemeal''': modifications to the communal mosaic are suggested one by one, which allows for a sober critical evaluation of these suggestions by the community. * '''Transparent''': suggested modifications and their evaluations are properly documented, so that there is no mystery as to whether, when, or why a certain modification was or wasn't accepted. The workflow is ''scalable'', as it can - in principle - be implemented in other fields of digital humanities and beyond.  
Hakob Barseghyan presents the workflow in action  +
Gregory Rupik outlines the scientonomic workflow  +
'''Scientonomy''' is defined as an academic discipline that aims to describe and explain the process of [[Scientific Change|scientific change]]. While still very much in the process of inception, it is conceived to have two major branches - ''theoretical scientonomy'' and ''observational scientonomy''. Theoretical scientonomy attempts to shed light on the ontology and dynamics of the process of scientific change. Observational scientonomy attempts to trace and explain historical and contemporary instances of scientific change. ===The Scope of Scientonomy=== ====The field of scientonomy==== The term scientonomy refers to the newly emerging ''science of science''. If science is considered the systematic study of the natural universe, then the science of science is the systematic study of the social and cognitive processes involved in knowledge production. Scientonomy approaches this study in a distinctive way. It is generally accepted nowadays that the body of theories accepted by epistemic agents - individual scientists or epistemic communities - and the methods employed by these agents to evaluate them ''change over time''.[[CITE_Barseghyan (2015)|pp. 217-225]] As the empirical scientific study of this process of scientific change, scientonomy aims at providing a new approach to developing a naturalistic account of how individuals and communities acquire knowledge. It differs from related fields of inquiry, such as the history of science or the sociology of scientific knowledge, in that it maintains that the process of scientific change, despite its varied guises, exhibits certain general patterns. It attempts to study and document those patterns by giving them precise formulations. As in any other field of empirical science, the findings of scientonomy are inevitably fallible and open to modification in the light of new evidence. 
The basis for this newly emerging field is Barseghyan's [[The Theory of Scientific Change|theory of scientific change]] as propounded in his 2015 book, ''The Laws of Scientific Change''.[[CITE_Barseghyan (2015)]] It builds on the ideas of Kuhn, Lakatos, Laudan, and others, all of whom can be considered precursors of scientonomy. The field of scientonomy, given its distinctive concern for both general theory and the explanation of historical particulars, is envisioned as having two branches. First, a theoretical branch attempts to uncover the ontology and the general mechanism of scientific change. Second, an observational branch attempts to trace and explain individual changes in the mosaics of various epistemic agents.[[CITE_Barseghyan (2015)|pp. 72-80]] ====Theoretical scientonomy==== Though highly relevant to the traditional field of philosophy of science, theoretical scientonomy differs from it in that, as a descriptive scientific field, it does not include the normative question of how science ''should'' be conducted so as to produce reliable knowledge. In the past, when a unitary and fixed scientific method was believed to exist, the descriptive question of how the process of scientific change actually works was often conflated with the normative question of how it should work if reliable knowledge is to be produced. Scientonomy seeks a clear distinction between the two, and claims only the former as its subject matter.[[CITE_Barseghyan (2015)|pp. 12-20]] This restriction is motivated by the same concerns as Bloor's symmetry postulate in the sociology of scientific knowledge.[[CITE_Golinski (1998)]] Scientonomy's descriptive account, however, does include the descriptive study of normative propositions espoused by scientific practitioners, such as those contained in their openly accepted norms, including scientific methods and ethical imperatives.[[CITE_Sebastien (2016)]] Theoretical scientonomy concerns itself specifically with two major tasks: # the formulation of a [[Ontology of Scientific Change|standard ontology]] of epistemic entities and relations involved in the process of scientific change; and # the unearthing of the [[Mechanism of Scientific Change|general patterns]] that underlie the process of scientific change. The search for fixed general laws obviates the charge of incoherent relativism sometimes leveled at the sociology of scientific knowledge.[[CITE_Siegel (2011)]] By seeking such laws, scientonomy hopes to illuminate questions such as the nature of scientific rationality, and the naturalistic epistemological question of how knowledge has been acquired. ====Observational scientonomy==== Observational scientonomy is seen as differing from the current history of science discipline in significant ways. History of science currently lacks a guiding theory; specifically, the lack of a standardized ontology often results in incommensurable historical narratives. It also often focuses on the level of individual scientists, their work, and their social context, rather than on epistemic communities. By contrast, scientonomy aims at theory-driven investigations of both individual and communal epistemic agents. It seeks to confront its current theory of scientific change with evidence that may force its alteration, refinement, or replacement, and to apply it to an expanding range of particular cases, thereby enhancing our general understanding of the processes of scientific change. ===Scientonomy vs. 
Particularism=== ====Scientonomy and the lack of a universal scientific method==== The approach of scientonomy contrasts with that of the particularism favored by some historians, social scientists, and philosophers. Particularism holds that the process of scientific change does not possess the sort of regularities that would render it amenable to any general theory. Its proponents typically make the tacit assumption that in order for a general mechanism of scientific change to exist, there must be a universal and unchanging method of science.[[CITE_Barseghyan (2015)|pp. xi-xvi, 81-97]] Historical evidence now clearly indicates that the methods used by scientists to assess new theories have altered radically over time and between communities. For example, the Aristotelian-medieval method held that a scientific theory should be a set of axioms from which other propositions may be deduced. The axioms should be intuitive in the sense that any person with sufficient experience with the subject should be able to appreciate them.[[CITE_Barseghyan (2015)|pp. 143-144]] Modern physicists would instead maintain that a theory must make novel predictions that are confirmed by observation and experiment.[[CITE_Barseghyan (2015)|p. 145]] Scientonomy accepts the evidence that scientific methods have changed over time and differ between communities, but rejects the implication that this renders a theory of scientific change impossible. Instead, it supposes that changes in both theory and method obey a certain set of laws. It is these laws, and not the methods of science, that scientonomy takes to be fixed.[[CITE_Barseghyan (2015)|pp. 82-83]][[CITE_Laudan (1984a)|pp. 33-41]] ====Individual and communal==== Individual scientists differ from one another in their goals, desires, and criteria for theory appraisal. This too might seem to be grounds for rejecting the possibility of a general theory of scientific change. 
But the decisions to accept new theories, or to employ new methods, are made collectively by scientific communities rather than by individuals acting alone. [[CITE_Longino (2016a)]] Such communities have emergent properties and behaviors that cannot be understood solely in terms of the properties which their members possess separately. Scientonomy supposes that the general regularities it seeks are to be found at the level of whole scientific communities, rather than with the unruly particulars of the work of individual scientists. It thus focuses its investigations at that level.[[CITE_Barseghyan (2015)|pp. 43-52]] ====The apparent lack of general features in science==== The particularist claim that science appears, to superficial observation at least, to possess no general features that have remained fixed through history is not grounds for dismissing the possibility of a theory of scientific change. Theories often reveal that unexpected regularities underlie seemingly disparate phenomena. On the face of it, a point of light revolving in the heavens and a falling apple seem to have nothing whatsoever in common. Newton’s theory of Universal Gravitation asserted, however, that both are movements under the influence of a gravitational force. The theory was highly successful in accounting for both falling bodies and the movements of the planets using a small set of simple general principles. The similarities between the two classes of phenomena only became evident through the formulation of the theory. Success in theory formulation often depends on the ability to identify such unexpected connections.[[CITE_Barseghyan (2015)|p. 86]]  
Presentation of scientonomy by Hakob Barseghyan  +
Scientonomy currently recognizes several different [[Epistemic Stances Towards Theories|stances]] that an [[Epistemic Community|epistemic community]] might take towards a theory. The community might [[Theory Acceptance|accept]] the theory as the best currently available description of the world, it might regard a theory as worthy of [[Theory Pursuit|pursuit]] and further development, or it might regard the theory as adequate for [[Theory Use|use]] for some practical purpose, while not the best description of the world.[[CITE_Barseghyan (2015)|pp. 30-42]] These stances, and their opposites (i.e. that a theory is unaccepted, neglected, or unused) together constitute the range of stances that a community might take towards a theory. The concept of a [[Scientific Mosaic|scientific mosaic]] consisting of the set of all theories accepted, and all methods employed by the community[[CITE_Barseghyan (2015)|pp. 1-11]] is central to scientonomy, as is the goal of explaining all changes in this mosaic. To fulfill this central goal, a scientonomic theory ought to explain how transitions from one accepted theory to another take place, and what logic governs that transition, but it doesn't necessarily need to explain why some theories are pursued and others neglected and why some are used and others remain unused.[[CITE_Barseghyan (2015)|p. 42]]  +
It is a task of scientonomy to trace and explain all changes in a mosaic, regardless of which field (discipline) the change concerns. This applies to all fields of inquiry considered scientific by the respective community. For instance, if theology or astrology were parts of the mosaic under study, then a transition from one accepted theological or astrological theory to another during that time period should be explained by scientonomy.  +
Any change in a mosaic is within the scope of scientonomy. Scientonomy should explain not only ''major'' transitions in the mosaic, such as those from the Aristotelian-Medieval set of theories to those of Descartes and his followers, but also relatively ''minor'' transitions, such as a transition from "the Solar system has 7 planets" to "the Solar system has 8 planets". The question of the actual taxonomy of scales is to be settled by an actual scientonomic theory. A scientonomic theory may distinguish between grand and minor changes, revolutions and normal-science changes, or hard core and auxiliary changes; in any case, it ought to provide explanations of changes at all levels.  +
Scientonomy ought not to limit its applicability to a restricted time period. If a scientific mosaic can be identified at a certain period in time, then it is a task of scientonomy to explain any and all changes in that mosaic at that time period. Similarly, observational scientonomists ought not to exclude any time period from their domain.  +
The goal of [[scientonomy]] is to give a descriptive account of the process of [[Scientific Change|scientific change]]. Given this goal, it is obvious that it must describe and explain how changes in the [[Scientific Mosaic|mosaic]] of accepted scientific [[Theory|theories]] and employed [[Method|methods]] take place. Any actual instance of scientific change is the result of an appraisal. Therefore, a theory of scientific change ''must'' provide an account of how theories are actually appraised and thereby explain how changes in the mosaic occur. On the other hand, it ''can'' but is ''not required'' to account for the process of theory construction.[[CITE_Barseghyan (2015)|p. 29]]  +
There are at least three sorts of questions that we might ask about the process of [[Scientific Change|scientific change]]: historical questions having to do with what theories and methods were accepted by a particular community at a particular point in time, theoretical questions about the mechanisms of scientific change, and methodological questions about how scientific change ought to happen and what theories and methods ought to be accepted. The first two questions are descriptive in nature, and the third is normative.[[CITE_Barseghyan (2015)|pp. 12-13]] As the "science of science", [[scientonomy]] seeks a purely descriptive account of processes of change in the [[Scientific Mosaic|scientific mosaic]] and therefore encompasses only historical and theoretical questions. Keeping descriptive scientific questions distinct from questions of normative methodology avoids numerous pitfalls. For example, those who conflate the two sometimes argue that because some method is known to have flaws of logical consistency or soundness, it cannot possibly have been the one that was, in fact, used by scientists. However, there is a great deal of historical evidence that scientists actually have used logically flawed methods. Inductive reasoning is a ubiquitous part of science, despite its well-known flaws.[[CITE_Vickers (2014)]][[CITE_Barseghyan (2015)|pp. 19-20]] The intrusion of normative concerns could also undermine scientonomy's aspirations to scientific status. If any discovered laws of scientific change were accorded normative force, they would become tautological truths incapable of being called into question by empirical inquiry.  +
The [[Method|methods]] employed in [[Theory Assessment Outcomes|theory assessment]] do not always correspond to the professed scientific [[Methodology|methodology]], and may be purely implicit. Thus, a scientonomic theory ought to distinguish between accepted methodologies and employed methods. Because of their role in theory assessment, and thus in determining the contents of the [[Scientific Mosaic|scientific mosaic]], a scientonomic theory ought to include employed methods, whether they are explicit or implicit.[[CITE_Barseghyan (2015)|pp. 52-61]]  +
Scientonomy focuses on the [[Scientific Mosaic|scientific mosaic]] of accepted [[Theory|theories]] and employed [[Method|methods]]. In their daily work, individual scientists rely on and formulate theories about the object of their research, and use methods to appraise their theories. Both the theories they believe and the criteria they use to assess them may change over time. Although historians of science have often focused on individual scientists - frequently those deemed great, like Galileo or Einstein - and the changes in their beliefs as they constructed and assessed theories, [[Scientific Change|changes to the scientific mosaic itself]] happen at the level of the community. Scientonomy thus seeks to focus efforts on the social level of the scientific community rather than on the individual.  +
Singular authority delegation is a sub-type of authority delegation. It describes a situation in which a community delegates authority over some topic to a single community. Instances of singular authority delegation occur commonly in the art world. Typically, the art market recognizes only one individual or community as being the sole expert on matters of attribution for a given artist. For example, the art market always and only consults the Wildenstein Institute to answer questions over the authenticity of paintings by Monet. Another example of singular authority delegation is the relationship between the art market and the two people considered experts on Picasso: Maya Widmaier-Picasso and Claude Ruiz-Picasso. For matters of authenticity concerning the works of Picasso, the art market always and only delegates authority to the combined Maya-Claude mosaic. The art market will only accept a Picasso painting as authentic if ''both'' Maya and Claude agree that it is so. Maya and Claude are two separate authorities, and do not always agree. However, because the art market only delegates authority to a single entity - the mosaic composed of theories agreed upon by Maya and Claude - this is an instance of singular authority delegation.  +
The definition tweaks the [[Singular Authority Delegation (Loiselle-2017)|original definition]] of the term by [[Mirka Loiselle|Loiselle]] to ensure that the relationship of singular authority delegation can obtain between [[Epistemic Agent|epistemic agents]] of all types. It also substitutes [[Question|''question'']] for ''topic'', as the former is the proper scientonomic term that should be used.  +
Sociocultural factors can impact the process of a theory's acceptance when the employed method of the community allows such factors to affect the process. This is derived from the Second Law alone. For example, a community which ascribes infallible power to a leader or a group of leaders is in a position to accept a theory simply because the leaders endorse it. Furthermore, such factors can lead a scientific community to reject a theory based on the acceptance of another social theory with which it is at odds. Barseghyan’s Laws of Scientific Change break from the traditional language used in philosophy of science of internal versus external factors in the mosaic. If external factors - a term that has traditionally referred to the influences of societal trends, politics, religion, and so on - are defined as “elements not included in the mosaic”, then by that very definition we must accept that these do not affect the mosaic at the time. This is a result of the fact that the 2nd law introduces new theories in the context of the accepted methods of the time. As a result, the language of “external” factors is problematic.[[CITE_Barseghyan (2015)]] Socio-cultural factors ought to be defined more explicitly. The question is, instead, whether factors such as economics, politics, and religion can influence the theories accepted in the mosaic. It follows from the Second Law that theories are assessed by the method in the mosaic at the time. Therefore, only if the method of the time mandates economic, political, religious, or other social requirements to be met by a theory before it is accepted do socio-cultural factors influence theory acceptance. Barseghyan provides the example of a hypothetical religious community with an accepted belief (i.e. theory) that holds that the religion’s High Priest always grasps the true essence of things. By the Third Law, a method may be employed in the mosaic that states that any proposition is acceptable, given that the High Priest utters it. 
In this case, it would appear as though socio-cultural factors are influencing, if not dictating, the course of scientific change in the community. This should not be confused with a case where a High Priest or other elite enforces their beliefs unscientifically, through threats, bribery, or otherwise. Should this happen, the change would be unscientific, as it would violate either the method employed at the time (and thereby the Second Law), or it would be creating a method in the mosaic which does not follow from the accepted theories at the time (and thereby the Third Law).[[CITE_Barseghyan (2015)]]  
Here is the detailed explanation of the deduction: <blockquote>When we refer to the second law, it becomes apparent that sociocultural factors can play part in the process of theory acceptance. In particular, it follows from the second law that something can affect a theory’s acceptance only insofar as it is permitted by the method employed at the time. If, for instance, the current method prescribes that theories are to be judged by their novel predictions, then confirmed novel predictions will be instrumental for the process of theory acceptance. But if the method prescribes that only intuitively true theories are acceptable, then the community’s intuitions will obviously affect the process of theory change. Similarly, if the method of the time ascribes an important role to the position of the dictator or the ruling party then, naturally, the process of theory acceptance will be influenced by the interests of the dictator or the ruling party. In short, it follows from the second law that sociocultural factors can affect a theory’s acceptance insofar as their influence is permitted by the method employed at the time.[[CITE_Barseghyan (2015)|p. 235]] </blockquote>  +
Split due to inconclusiveness can occur when two mutually incompatible theories are accepted simultaneously by the same community.  +
Barseghyan notes that, "when a mosaic split is a result of the acceptance of two new theories, it may or may not be a result of inconclusiveness".[[CITE_Barseghyan (2015)|p. 209]] [[File:Mosaic Split Resulting From Two Mutually Incompatible Theories May Not Be A Result of Inconclusive Theory Assessment.png|578px|center||]] "Thus," he concludes, "if we are to detect any instances of inconclusive theory assessment, we must refer to the case of a mosaic split that takes place with only one new theory becoming accepted by one part of the community with the other part sticking to the old theory. This scenario is covered by the possible mosaic split theorem. We can conclude that when a mosaic split takes place with only one new theory involved, this can only indicate that the outcome of the assessment of that theory was inconclusive."[[CITE_Barseghyan (2015)|pp. 209-210]] This is the deduction of the Split Due to Inconclusiveness Theorem.  +
A [[Procedural Method|procedural method]] is a method which doesn't presuppose any contingent propositions; it can only presuppose necessary truths such as those of mathematics or logic. Given the nature of necessary truths, it is impossible for one such truth to contradict another necessary truth since it must be true in all possible worlds. Therefore, it follows from the '''Method Rejection''' theorem that, since there can be no elements at odds with a necessary truth, any procedural method is, in principle, static.  +
Barseghyan (2015) deduces the Static Procedural Methods theorem as follows and with the following justification:[[CITE_Barseghyan (2015)|pp. 224-225]] <blockquote>By the method rejection theorem, a method is rejected only when other methods that are incompatible with the method in question become employed. Thus, a replacement of a procedural method by another method would be possible if the two were incompatible with each other. However, it can be shown that a procedural method can never be incompatible with any other method, procedural or substantive. Consider first the case of a procedural method being replaced by another procedural method. By definition, procedural methods don’t presuppose anything contingent: they can only presuppose necessary truths. But two necessary truths cannot be incompatible, since necessary truths (by definition) hold in all possible worlds. Therefore, two methods based exclusively on necessary truths cannot be incompatible either; i.e. any two procedural methods are always compatible. Consequently, by the method rejection theorem, one procedural method cannot replace another procedural method. Consider a new necessarily true mathematical proposition that has been proven to follow from other necessary true mathematical propositions. By the second law, this new theorem becomes accepted into the mosaic. The acceptance of this theorem can lead to the invention and employment of a new procedural method based on this new theorem. Yet, this new method can never be incompatible with other employed procedural methods, just as the newly proven theorem can never be incompatible with those theorems which were proven earlier (of course, insofar as all of these theorems are necessary truths). Thus, the only question that remains to be answered here is whether a procedural method can be replaced by a substantive method? Again, the answer is “no”. 
Substantive methods presuppose some contingent propositions about the world, while procedural methods presuppose merely necessary truths. But a necessary truth is compatible with any other truth (contingent or necessary). Therefore, no newly accepted theory can be incompatible with an accepted necessary truth. In particular, if we take the principles of pure mathematics to be necessarily true, then it follows that no empirical theory (i.e. physical, chemical, biological, psychological, sociological etc.) can be incompatible with the principles of mathematics. Consequently, a new substantive method can never be incompatible with procedural methods. Take the above-discussed abstract deductive acceptance method based on the definition of deductive inference: if a proposition is deductively inferred from other accepted propositions, it is to be accepted. It is safe to say that no substantive method can be incompatible with this requirement, for to do so would mean to be incompatible with the definition of deductive inference, which is inconceivable. Thus, a procedural method can be replaced neither by substantive nor by procedural methods. This brings us to the conclusion that all procedural methods are in principle static.</blockquote> Here is the deduction: [[File:Static-procedural-methods.jpg|607px|center||]]  
Nicholas Overgaard explains the topic  +
A more specialized [[Discipline| discipline]] ''A'' is a subdiscipline of another, more general discipline ''B'', if and only if the set of [[Question| questions]] ''Q<sub>A</sub>'' of ''A'' is a proper subset of the questions ''Q<sub>B</sub>'' of ''B''.[[CITE_Patton and Al-Zayadi (2021)]] For example, cellular neurobiology, the discipline which deals with the cellular properties of nerve cells, is a subdiscipline of neuroscience, which deals with the properties and functions of nervous systems. The [[Scientific Mosaic|scientific mosaic]] consists of [[Theory|theories]] and [[Question|questions]].[[CITE_Barseghyan (2015)]][[CITE_Barseghyan (2018)]][[CITE_Rawleigh (2018)]][[CITE_Sebastien (2016)]] As a whole, a discipline ''A'' consists of a set of accepted questions ''Q<sub>A</sub>'' and the theories which provide answers to those questions, or which those questions presuppose.[[CITE_Patton and Al-Zayadi (2021)]] Questions form hierarchies, with more specific questions being [[Subquestion| subquestions]] of more general questions. Theories find a place in these hierarchies, since each theory is an attempt to answer a certain question, and each question presupposes certain theories. It is sometimes the case that the questions ''Q<sub>B</sub>'' of a broader discipline ''B'' can include all of the questions ''Q<sub>A</sub>'' of ''A'' as subquestions, with the questions of ''A'' forming a proper subset of the questions of ''B''. In this situation, ''A'' is then said to be a subdiscipline of ''B''.  +
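The proper-subset condition in this definition lends itself to a small illustration. The sketch below uses hypothetical question sets and a helper name (`is_subdiscipline`) introduced purely for illustration; none of it comes from the source:

```python
# Hypothetical question sets, for illustration only (not from the source).
neuroscience = {
    "How do nervous systems develop?",
    "How do neurons generate action potentials?",
    "How do ion channels shape a neuron's membrane potential?",
}

cellular_neurobiology = {
    "How do neurons generate action potentials?",
    "How do ion channels shape a neuron's membrane potential?",
}

def is_subdiscipline(q_a: set, q_b: set) -> bool:
    """A is a subdiscipline of B iff Q_A is a *proper* subset of Q_B."""
    return q_a < q_b  # Python's < on sets tests the proper-subset relation

print(is_subdiscipline(cellular_neurobiology, neuroscience))  # True
print(is_subdiscipline(neuroscience, neuroscience))           # False: not a *proper* subset
```

Note that the relation is deliberately strict: a discipline does not count as a subdiscipline of itself, since its questions must form a ''proper'' subset of the broader discipline's questions.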
A [[Question| question]] is a topic of inquiry. [[CITE_Rawleigh (2018)]] Questions can constitute hierarchies where more specific questions are subquestions of broader questions. For example, 'Was Peter the Great an emperor of Russia?' is a subquestion of 'Who were the emperors of Russia?' since by answering the former, we are also providing a partial answer to the latter. The latter is, in turn, a subquestion of the broader question 'Who were the rulers of European countries?'. [[CITE_Patton and Al-Zayadi (2021)]] A partial answer to a question is a complete, or direct, answer to one of its subquestions.[[CITE_Beck and Sharvit (2002)]][[CITE_Sharvit and Beck (2001)]][[CITE_Eckardt (2007)]]  +
Barseghyan presents the redrafted ontology  +
A '''Sufficient Reason''' is an agent's ''actual'' reason for accepting a theory. It is sufficient in that it guarantees, by the [[Sufficient Reason theorem (Palider-2019)]], that the agent accepts the theory for which there is a sufficient reason. A sufficient reason aims to separate the components by which an agent accepts a theory: the acceptance of the theory that serves as a reason, the acceptance of [[Implication (Palider-2019)]], the employment of a method, and the acceptance of [[Normative Inference (Palider-2019)]]. A sufficient reason is to be contrasted with a [[Reason (Palider-2019)]], as it further involves the acceptance of the reason. Some questions surrounding acceptance without a sufficient reason remain to be explored ([[Theory Acceptance without Sufficient Reason]]).  +
The '''Sufficient Reason theorem''' shows how a sufficient reason leads to acceptance. This theorem follows from the definition of a [[Sufficient Reason (Palider-2019)]] and from [[The Second Law (Patton-Overgaard-Barseghyan-2017)]]. By the second law, if a theory satisfies the acceptance criteria of the method employed at the time, it becomes accepted. The claim of this theorem is that if there is a sufficient reason for a theory, then that theory satisfies the acceptance criteria of the time. This claim is justified as follows. The fourth condition of a sufficient reason states that the sufficient reason, alongside the employed method of the time, ''normatively infers'' (see [[Normative Inference (Palider-2019)]]) that the agent should accept the reasoned-for theory. This statement is stipulated to mean that the acceptance criteria of the time are satisfied. However, it should be understood as further explicating what it means for the acceptance criteria to be satisfied, rather than simply being equated to a previously vague notion. It specifies that the [[Support (Palider-2019)]] (in condition 2 of a sufficient reason) constitutes strong enough "evidence" for the method to deem the supported theory as one that should be accepted. The conclusion that the supported theory should be accepted roughly means that the assessment of the theory is conclusive, i.e. conclusively in favour of acceptance. By this understanding of normative inference, as explaining what satisfying acceptance criteria means, it follows that when there is a sufficient reason, acceptance criteria are satisfied, hence the supported theory becomes accepted. One thing to note about the second law is that acceptance could potentially occur even when assessment is inconclusive. 
The connection between normative inference and inconclusive assessments has not been explored, but one possible idea is that inconclusive assessments are those that include a permissible normative operator in the conclusion of normative inference.  
When one theory is said to follow from another, that theory is supported by the other theory. This notion of support relies on that of [[Implication (Palider-2019)]]: support requires that one theory ''implies'' the other. Support, just like implication, is not equated with logical deduction; it just means that, for the agent, there is some rule-governed (or logical) connection between the supported theory and its support. As such, generally, if an agent considers one theory as evidence for another theory, then that evidence is said to support the theory, regardless of how (in)conclusive the evidence is. Support does not at all guarantee that the supported theory becomes accepted by the agent; that further requires that the agent has a [[Sufficient Reason (Palider-2019)]]. Rather, support offers a useful way of talking about pieces of evidence for theories, and about the assumptions that make the supporting theories evidence.  +
The principle of this theorem is first introduced in [[Barseghyan (2015)]]. We recall that "there are two somewhat distinct scenarios of method employment. In the first scenario, a method becomes employed when it strictly follows from newly accepted theories. In the second scenario, a method becomes employed when it implements the abstract requirements of some other employed method by means of other accepted theories. It can be shown that method rejection is only possible in the first scenario; no method can be rejected in the second scenario. Namely, it can be shown that method rejection can only take place when some other method becomes employed by strictly following from a new accepted theory; the employment of a method that is not a result of the acceptance of a new theory and is merely a new implementation of some already employed method cannot possibly lead to a method rejection."[[CITE_Barseghyan (2015)|p. 174]] As per Barseghyan, it is important to note that "two implementations of the same method are not mutually exclusive and the employment of one doesn’t lead to the rejection of the other".[[CITE_Barseghyan (2015)|p. 176]] Barseghyan illustrates this nicely with the example of cell-counting methods (see below). Furthermore, he writes, "an employment of a new concrete method cannot possibly lead to a rejection of any other employed method. Indeed, if we take into account the fact that a new concrete method follows deductively from the conjunction of an abstract method and other accepted theories, it will become obvious that this new concrete method cannot possibly be incompatible with any other element of the mosaic. We know from the ''zeroth law'' that at any stage the elements of the mosaic are compatible with each other. Therefore, no logical consequence of the mosaic can possibly be incompatible with other elements of the mosaic. But the new method that implemented the abstract method is just one such logical consequence".[[CITE_Barseghyan (2015)|p. 176-7]] According to ''the synchronism of method rejection theorem'', a [[Method|method]] becomes rejected only when some of the [[Theory|theories]] from which it follows become rejected. By the method rejection theorem, a method is rejected when other methods incompatible with it become employed. By the [[The Third Law (Barseghyan-2015)|Third Law]], this can happen only when some of the theories from which it follows are also rejected.[[CITE_Barseghyan (2015)|p. 177-183]]  
Barseghyan explains the deduction:[[CITE_Barseghyan (2015)|pp. 177-178]] <blockquote> By the ''method rejection theorem'', a method is rejected only when other methods incompatible with the method become employed. Thus, we must find out when exactly two methods can be in conflict. In order to find that out, we must refer to the ''third law'' which stipulates that an employed method is a deductive consequence of accepted theories and other methods. Logic tells us that when a new employed method is incompatible with an old method, it is also necessarily incompatible with some of the theories from which the old method follows. Therefore, an old method can be rejected only when some of the theories from which it follows are also rejected.</blockquote> [[File:Synchronism-of-method-rejection.jpg|607px|center||]]  +
As Barseghyan notes, it can be tempting to say that the ''double blind trial method'' replaced ''the blind trial method''. But this is not a correct explication of the method dynamics at play. Barseghyan provides a more detailed explanation in this historical example that helps to explain the ''synchronism of method rejection theorem''. He begins: <blockquote>To be sure, ''the blind trial method'' was replaced in the mosaic, but not by ''the double-blind trial method''. Rather, it was replaced by the abstract requirement that when assessing a drug’s efficacy one must take into account the possible experimenter’s bias. The employment of ''the double-blind trial method'' was due to the fact that it specified this abstract requirement. Its employment ''per se'' had nothing to do with the rejection of the blind trial method.[[CITE_Barseghyan (2015)|p. 178]]</blockquote> He continues his explanation with a closer look at the ''blind trial method'': <blockquote>Recall ''the blind trial method'' which required that a drug’s efficacy is to be shown in a trial with two groups of patients, where the active group is given the real pill, while the control group is given a placebo. Implicit in ''the blind trial method'' was a clause that it is ok if the researchers know which group is which. This clause was based on the tacit assumption that the researchers’ knowledge cannot affect the patients and, thus, cannot void the results of the trial. Although this assumption was hardly ever expressed, it is safe to say that it was taken for granted – we would allow the researchers to know which group of patients is which until we learned about the phenomenon of experimenter’s bias... Once we learned about the possibility of experimenter’s bias, the blind trial method became instantly rejected. 
More precisely, the acceptance of the ''experimenter’s bias thesis'' immediately resulted in the abstract requirement that, when assessing a drug’s efficacy, one must take the possibility of the experimenter’s bias into account. Consequently, two elements of the mosaic became rejected: the blind trial method and the tacit assumption that the experimenters’ knowledge doesn’t affect the patients and cannot void the results of trials... Now, ''the experimenter’s bias thesis'' yielded the new abstract requirement to take into account the possible experimenter’s bias. This requirement, in turn, replaced the blind trial method with which it was incompatible (by the method rejection theorem).[[CITE_Barseghyan (2015)|p. 178-80]] </blockquote> Therefore, Barseghyan concludes, "the double-blind trial method had nothing to do with the rejection of the blind trial method. By the time the double-blind trial method became employed, the blind trial method had already been rejected. So even if we had never devised the double-blind trial method, the blind trial method would have been rejected all the same".[[CITE_Barseghyan (2015)|p. 180]] In summary, "the rejection of the blind trial method took place synchronously with the rejection of the theory on which it was based".[[CITE_Barseghyan (2015)|p. 180]] Hence, this is a historical example of the ''synchronism of method rejection theorem''.  
Barseghyan answers this question using the following historical example: <blockquote>Once we understood that the unaided human eye is incapable of obtaining data about extremely minute objects (such as cells or molecules), we were led to an employment of the abstract requirement that the counted number of cells is acceptable only if it is acquired with an “aided” eye. This abstract requirement has many different implementations such as ''the counting chamber method'', ''the plating method'', ''the flow cytometry method'', and ''the spectrophotometry method''. What is interesting from our perspective is that these different implementations are compatible with each other – they are not mutually exclusive. In fact, a researcher can pick any one of these methods, for these different concrete methods are connected with a logical OR. Thus, the number of cells is acceptable if it is counted by means of a counting chamber, or a flow cytometer, or a spectrophotometer. The measured value is acceptable provided that it satisfies the requirements of at least one of these methods ... To generalize the point, different implementations of the same abstract method cannot possibly be in conflict with each other, for any concrete method is a logical consequence of some conjunction of the abstract method and one or another accepted theory (by ''the third law'').[[CITE_Barseghyan (2015)|p. 175-6]]</blockquote>  +
"The belief that the nature of a thing cannot be properly studied if it is placed in artificial conditions," Barseghyan writes, was central in Aristotelian-Medieval thought thanks to the strict distinction between ''natural'' and artificial'' in the Aristotelian-Medieval mosaic.[[CITE_Barseghyan (2015)|p. 180-1]] Since it was understood that, "when placed in artificial conditions, a thing does not behave as it is prescribed by its very nature, but as designed by the craftsman", it was accepted in the Aristotelian-Medieval mosaic that "experiments can reveal nothing about the natures of things".[[CITE_Barseghyan (2015)|p. 181]] Barseghyan notes that a "requirement that follows from this belief is that an acceptable hypothesis that attempts to reveal the nature of a thing cannot rely on experimental data; the nature of a thing is to be discovered only by observing the thing in its natural, unaffected state any experiments could not constitute proper study or reveal true information about the nature of things".[[CITE_Barseghyan (2015)|p. 181]] We may call this the ''no experiments'' limitation. Perhaps obviously, the ''no experiments'' limitation came to be rejected. The dynamics of this rejection are important as a historical example of the ''synchronism of method rejection'' theorem. "Importantly," continues Barseghyan, "the rejection was synchronous with the rejection of the natural/artificial distinction".[[CITE_Barseghyan (2015)|p. 182]] Particularly, he notes, the two theories that came to replace the Aristotelian natural philosophy – the Cartesian and Newtonian natural philosophies – both assumed that there is no strict distinction between artificial and natural, that is, "that all material objects obey the same set of laws, regardless of whether they are found in nature or whether they are created by a craftsman".[[CITE_Barseghyan (2015)|p. 
182]] In abandoning the artificial/natural distinction, "we also realized that experiments can be as good a source of knowledge about the world as observations. Consequently, we had to modify our method, and accepted the new ''experimental method'': "When assessing a theory, it is acceptable to rely on the results of both observations and experiments".[[CITE_Barseghyan (2015)|p. 182]] Since the ''experimental method'' was incompatible with the previous Aristotelian ''no-experiments method,'' that method, by the ''method rejection theorem,'' was rejected. The crux of this historical episode, as Barseghyan emphasizes, is that the Aristotelian "limitation was rejected simultaneously with the rejection of the natural/artificial distinction on which it was based – exactly as ''the synchronism of method rejection theorem'' stipulates".[[CITE_Barseghyan (2015)|p. 182]]  
[[Barseghyan (2015)]] introduces the '''synchronism of method rejection theorem''' through the following hypothetical example. <blockquote>Let us start with the following case. Suppose there is a new method that implements the requirements of a more abstract method which has been in the mosaic for a while. By the third law, the new method becomes employed in the mosaic. Question: what happens to the abstract method implemented by the new method? The answer is that the abstract method necessarily maintains its place in the mosaic. By the method rejection theorem, a method gets rejected only when it is replaced by some other method which is incompatible with it. But it is obvious that our new method cannot possibly be in conflict with the old method. This is not difficult to show. To say that the new method implements the abstract requirements of the old abstract method is the same as to say that the new method follows from the conjunction of the abstract method and some accepted theories. Yet, if we consider the two methods in isolation, we will be convinced that the abstract ''Method 1'' is a logical consequence of the new ''Method 2''. (When ''Method 2'' implements the requirements of ''Method 1'', ''Method 1'' is necessarily a logical consequence of ''Method 2''.) To rephrase the point, if a theory satisfies the more concrete requirements of ''Method 2'', it also necessarily satisfies the more abstract requirements of ''Method 1''.[[CITE_Barseghyan (2015)|p. 174-5]]</blockquote> This hypothetical illustration aligns well with the historical example of the ''abstract requirement to take the placebo effect into account'' being implemented through the ''blind trial method'': <blockquote>Recall, for instance, the abstract requirement that, when assessing a drug’s efficacy, the placebo effect must be taken into account. Recall also its implementation – the blind trial method. It is evident that when the more concrete requirements of the blind trial method are satisfied, the more abstract requirement to take into account the possibility of the placebo effect is satisfied as well. This is because the abstract requirement is a logical consequence of the blind trial method: by testing a drug’s efficacy in a blind trial, we thus take into account the possible placebo effect.[[CITE_Barseghyan (2015)|p. 175]]</blockquote>  
T
There is accepted propositional technological knowledge which appears to exhibit the same patterns of change as questions, theories, and methods in the natural, social, and formal sciences. Technological theories attempting to describe the construction and operation of artifacts as well as to prescribe their correct mode of operation are not merely used, but also often accepted by epistemic agents. Since technology often involves methods different from those found in science and produces normative propositions, many of which remain tacit, one may be tempted to think that changes in technological knowledge should be somehow exempt from the laws of scientific change. Yet, the discussion of the historical cases of sorting algorithms, telescopes, crop rotation, and colorectal cancer surgeries shows that technological theories and methods are often an integral part of an epistemic agent’s mosaic and seem to exhibit the same scientonomic patterns of change typical of accepted theories therein. Thus, propositional technological knowledge can be part of a mosaic.  +
The following passage from [[Barseghyan (2015)|''The Laws of Scientific Change'']] summarizes the gist of the law: <blockquote>According to ''the first law'', any element of the mosaic of accepted theories and employed methods remains in the mosaic except insofar as it is overthrown by another element or elements. Basically, the law assumes that there is certain inertia in the scientific mosaic: once in the mosaic, elements remain in the mosaic until they get replaced by other elements. It is reasonable therefore to call it ''the law of scientific inertia''.[[CITE_Barseghyan (2015)|p. 123]]</blockquote>  +
Pandey provides the following summary of the argument: <blockquote>I then discussed a number of scenarios of theory replacement ''allowed'' by the first law, such as the replacement by negation, the replacement by an answer to a different question, the replacement that involves the rejection of the question, and the replacement by a higher-order proposition. The only scenario, I argued, that the first law ''forbids'' is that of the rejection without any replacement whatsoever, as in the cases of element decay. The very existence of the phenomenon of element decay, therefore, poses a problem for the first law: if element decay is forbidden by the first law, then does this imply that the first law has been falsified? This brought us to our dilemma: either (1) exclude the cases of rejection without replacement from the scope of scientonomy and admit that the first law is a tautology or (2) include the cases of rejection without replacement into the scope of scientonomy and admit that such cases present a serious anomaly for the first law.</blockquote> <blockquote>My solution was to opt for the first option, as it seemed to be the lesser of two evils. In support of this option, I indicated how the procedure of limiting the scope is ubiquitous in many other fields of inquiry; thus, there is nothing inherently vicious in excluding certain non-epistemic phenomena (such as element decay) from the scope of our discipline. I also drew parallels between the scientonomic first law and Newton’s first law: while the latter too has been considered tautological, not many have thought that it is necessarily a serious problem. Thus, the tautological nature of our first law is not inevitably problematic. It is still unclear at this stage whether this should prompt scientonomers to consider alternatives to the first law.[[CITE_Pandey (2023)|p. 43]]</blockquote>  +
Formulated for [[method]]s, the first law states that the implicit expectations employed in theory assessment will continue to be employed until they are replaced by some alternate expectations. Just as is the case for [[The First Law for Theories (Barseghyan-2015)]], this law does not impose limitations on the sort of methods that can replace employed methods.[[CITE_Barseghyan (2015)|p. 125]] However, importantly, "the community never remains with no expectations whatsoever. When facing a new theory, the community always has some implicit expectations concerning such theories. These expectations may be very specific or they may be very abstract and vague, but some expectations are always present, for otherwise no theory assessment would be possible."[[CITE_Barseghyan (2015)|p. 126]]  +
Pandey makes a case that the first law and all of its corollaries are tautological.[[CITE_Pandey (2023)]]  +
Here are some possibilities for how method replacement by the first law might occur, as formulated by Barseghyan (2015): <blockquote>In the most basic case, a community can reject some of the more specific requirements of its currently employed method and revert to a more abstract method. Alternatively, it can replace those rejected requirements with some new specific requirements. Suppose the employed method stipulates that a new theory must be tested in repeatable experiments and observations. In principle, the community may one day remove some of the ingredients of this method, say, the requirement of repeatability. As a result, the community can either revert to a more abstract method or it can introduce a new requirement to replace the repeatability clause. For instance, the community may revert to the more abstract method which stipulates a new theory must be tested in experiments and observations (no repeatability requirement). Alternatively, it can introduce a new requirement that in addition to empirical testing a new theory must also explain all the facts explained by the accepted theory. Which of these two scenarios materialize at each particular instance is decided by a number of contingent factors.[[CITE_Barseghyan (2015)|p. 125]]</blockquote>  +
Pandey makes a case that the first law and all of its corollaries are tautological.[[CITE_Pandey (2023)]]  +
Pandey makes a case that the first law and all of its corollaries are tautological.[[CITE_Pandey (2023)]]  +
According to this formulation of the first law for theories, an accepted [[theory]] remains accepted unless replaced by other theories, even though sometimes that replacement may simply be the negation of the theory. That is, "if for some reason scientists of a particular field stop pursuing new theories, the last accepted theory will safely continue to maintain its position in the mosaic," with no further confirmation of the theory required.[[CITE_Barseghyan (2015)|p. 122]] There is no specification of what sort of theory might replace an accepted theory. Barseghyan notes that, in the most minimal case, a theory may simply be replaced by its own negation.[[CITE_Barseghyan (2015)|p. 122]]  +
Pandey makes a case that the first law and all of its corollaries are tautological.[[CITE_Pandey (2023)]]  +
Specifically, in contemporary ''empirical'' science, "we do not reject our accepted empirical theories even when these theories face anomalies (counterexamples, disconfirming instances, unexplained results of observations and experiments)."[[CITE_Barseghyan (2015)|p.122-3]] This is known as anomaly-tolerance. Though it cannot be said to be a universal feature of science, it is by no means a new feature, as Barseghyan (2015) observes that "this anomaly-tolerance has been a feature of empirical science for a long time" and provides the following key examples of anomaly-tolerance, following Evans (1958, 1967, 1992), in the context of Newtonian theory.[[CITE_Barseghyan (2015)|p.123]] <blockquote>The famous case of Newtonian theory and Mercury’s anomalous perihelion is a good indication that anomalies were not lethal for theories also in the 19th century empirical science. In 1859, it was observed that the behaviour of planet Mercury doesn’t quite fit the predictions of the then-accepted Newtonian theory of gravity. The rate of the advancement of Mercury’s perihelion (precession) wasn’t the one predicted by the Newtonian theory. For the Newtonian theory this was an anomaly. Several generations of scientists tried to find a solution to this problem. But, importantly, this anomaly didn’t falsify the Newtonian theory. The theory remained accepted for another sixty years until it was replaced by general relativity circa 1920. This wasn’t the first time that the Newtonian theory faced anomalies. In 1750 it was believed that the Earth is an oblate-spheroid (i.e. that it is flattened at the poles). This was a prediction that followed from the then-accepted Newtonian theory, a prediction that had been confirmed by Maupertuis and his colleagues by 1740. However, soon very puzzling results came from the Cape of Good Hope: the measurements of Nicolas Louis de Lacaille were suggesting that, unlike the northern hemisphere, the southern hemisphere is prolate rather than oblate. 
Thus, the Earth was turning out to be pear-shaped! Obviously, the length of the degree of the meridian measured by Lacaille was an anomaly for the accepted oblate-spheroid view and, correspondingly, for the Newtonian theory. Of course, as with any anomaly, this one too forced the community to look for its explanation by rechecking the data, by remeasuring the arc, and by providing additional assumptions. Although it took another eighty years until the puzzle was solved, Lacaille’s anomalous results didn’t lead to the rejection of the then-accepted oblate-spheroid view. Finally, in 1834-38, Thomas Maclear repeated Lacaille’s measurements and established that the deviation of Lacaille’s results from the oblate-spheroid view were due to the gravitational attraction of Table Mountain. The treatment of Lacaille’s results – as something bothersome but not lethal – reveals the anomaly-tolerance of empirical science even in the 18th century.[[CITE_Barseghyan (2015)|p.123]]</blockquote>  
Barseghyan (2015) provides further examples of anomaly-tolerance that precede Newtonian theory: <blockquote>Take the Aristotelian-medieval natural philosophy accepted up until the late 17th century. Tycho’s Nova of 1572 and Kepler’s Nova of 1604 seemed to be suggesting that, contrary to the view implicit in the Aristotelian-medieval mosaic, there is, after all, generation and corruption in the celestial region. In addition, after Galileo’s observations of the lunar mountains in 1609, it appeared that celestial bodies are not perfectly spherical in contrast to the view of the Aristotelian-medieval natural philosophy. Moreover, observations of Jupiter’s moons (1609) and the phases of Venus (1611) appeared to be indicating that planets are much more similar to the Earth than to the Sun in that they too have the capacity for reflecting the sunlight. All these observational results were nothing but anomalies for the accepted theory which led to many attempts to reconcile new observational data with the accepted Aristotelian-medieval natural philosophy. What is important is that the theory was not rejected; it remained accepted throughout Europe for another ninety years and was overthrown only by the end of the 17th century.[[CITE_Barseghyan (2015)|p.123-4]]</blockquote>  +
Replacement-by-negation is not the only possible scenario for the replacement of an accepted proposition. In fact, as is noted in Barseghyan (2015), various scientific communities may have additional requirements for what can replace an accepted proposition. Barseghyan (2015) summarizes the following episode in contemporary physics as an example of a time when a scientific community imposed additional requirements on theory replacement: <blockquote>Consider, for instance, the case of quantum field theory (QFT) in the 1950-60s. In the late 1940s, QFT was successfully applied to electromagnetic interactions by Schwinger, Tomonaga, and Feynman when a new theory of quantum electrodynamics (QED) was created. Hope was high that QFT could also be applied to other fundamental interactions. However, it soon became apparent that the task of creating quantum field theories of weak and strong interactions was not an easy one. It was at that time (the 1950-60s) when QFT was severely criticized by many physicists. Some physicists criticized the techniques of renormalization which were used to eliminate the infinities in calculated quantities. Dirac, for instance, thought that the procedure of renormalization was an “ugly trick”. Another line of criticism was levelled against QFT by Landau, who argued in 1959 that QFT had to be rejected since it employed unobservable concepts such as local field operators, causality, and continuous space time on the microphysical level. It is a historical fact however that, all the criticism notwithstanding, QFT was not rejected. In short, there was serious criticism levelled against the then-accepted theory, but it didn’t lead to its rejection, for the physics community of the time didn’t allow for a simple replacement-by-negation scenario.[[CITE_Barseghyan (2015)|p.122]]</blockquote>  +
Barseghyan (2015) contends that "the attitude of the community towards anomalies is historically changeable and non-uniform across different fields of science."[[CITE_Barseghyan (2015)|p.124]] Both anomaly-intolerant and anomaly-tolerant attitudes can prevail in different communities. Firstly, consider the ''historical'' example put forth by Barseghyan (2015): <blockquote>Consider the famous four color theorem currently accepted in mathematics which states that no more than four colors are required to color the regions of the map so that no two adjacent regions have the same color. Suppose for the sake of argument that a map were found that required no less than five colors to color. Question: how would mathematicians react to this anomaly? Yes, they would check, double-check, and triple-check the anomaly, but once it were established that the anomaly is genuine and it is not a hoax, the proof of the four color theorem would be revoked and the theorem itself would be rejected. Importantly it could be rejected without being replaced by any other general proposition. Its only replacement in the mosaic would be the singular proposition stating the anomaly itself. This anomaly-intolerance is a feature of our contemporary formal science. Thus, we have to accept that anomaly-tolerance is not a universal feature of science.[[CITE_Barseghyan (2015)|p.125]]</blockquote> Additionally, Barseghyan (2015) extends this example into a brief ''theoretical'' discussion. That is: <blockquote>The first law for theories doesn’t impose any limitations as to what sort of propositions can in principle replace the accepted propositions; it merely says that there is always some replacement. This replacement can be as simple as a straightforward negation of the accepted proposition, or a full-fledged general theory, or a singular proposition describing some anomaly. 
The actual attitude of the community may be different at different time periods and in different fields of science.[[CITE_Barseghyan (2015)|p.125]]</blockquote>  
As is noted in the description above, accepted theories may simply be replaced by their negation. Barseghyan (2015) uses a hypothetical example to explain this possibility: <blockquote>Suppose a scientific community accepts that a certain drug is therapeutically efficient in alleviating a certain condition. In principle, this proposition can be replaced by its own negation, i.e. the proposition that the drug is not efficient in alleviating the condition. HSC shows many examples of this sort. Recall, for instance, the medieval and early modern belief that bloodletting is efficient in restoring the proper balance of humors in the body and, thus, restoring health. When this belief was rejected it was simply replaced by its negation.[[CITE_Barseghyan (2015)|p. 122]]</blockquote>  +
Pandey makes a case that the first law and all of its corollaries are tautological.[[CITE_Pandey (2023)]]  +
The law of compatibility links the compatibility criteria with various assessment outcomes. If [[Compatibility|compatibility]] is defined as the ability of a pair of elements to co-exist in the same mosaic, then the assessment for compatibility is essentially the process by which the epistemic agent decides whether any given pair of elements (i.e. theories, questions, methods) can be simultaneously part of their mosaic. Such an assessment can yield three possible outcomes – ''satisfied'', ''not satisfied'', and ''inconclusive''.[[CITE_Fraser and Sarwar (2018)|p. 73]] Accordingly, the law of compatibility states that if a pair of elements does satisfy the compatibility criteria of the time, then it is deemed compatible. If, however, an element is taken to be incompatible with the other one, then the pair is deemed incompatible. Finally, the assessment of compatibility may be inconclusive. In this case, the pair may be deemed compatible, incompatible, or its status may remain unknown. The diagram below summarizes the relation between assessment outcomes and their effects: [[File:The Law of Compatibility Assessment Outcomes (Fraser-Sarwar-2018).png|617px|center||]]  +
According to Fraser and Sarwar, their formulation of the law of compatibility "is non-tautological, as it prohibits certain logical possibilities."[[CITE_Fraser and Sarwar (2018)|p. 73]]  +
This law of method employment is a corollary of [[The Law of Norm Employment (Rawleigh-2022)|Rawleigh's law of norm employment]]. It implies that, just like the norms of all other types, methods become employed when they are derivable from other elements of the agent's mosaic (such as other theories, other methods, and perhaps even questions). As such, the law preserves most of the content of [[The Third Law (Sebastien-2016)|Sebastien's third law]] by solving some of the issues inherent in it. See [[The Law of Norm Employment (Rawleigh-2022)]] for a more thorough exposition.  +
[[The Third Law (Sebastien-2016)|Sebastien's law of method employment]] faces several problems. Foremost among these is that it is based on an outdated ontology that assumes that methods of theory evaluation are a fundamental epistemic element. After the acceptance of [[Modification:Sciento-2018-0006|Barseghyan’s proposal]] that methods be subsumed under the category of normative theories, the third law no longer exhaustively covers all cases of norm employment. In its present form it is limited to methods, though there is no reason to think that the mechanism by which a method is employed is any different from the mechanism by which any other norm is employed. In addition, Sebastien's formulation of the third law uses the term ''deducible'', which currently lacks a scientonomic definition. We do not currently know what it means for something to be deducible, what the criteria of deducibility would be, or whether the conditions of deducibility would be part of the first-order theories of the mosaic or part of the second-order theories that range over the mosaic. The third issue with Sebastien's formulation is that, with the acceptance of questions into the epistemic elements of the ontology of scientific change, the elements of the mosaic are now more expansive than just theories and subtypes of theories. This means that there is a plausible situation in which norms could potentially be derived – at least in part – from questions, which means that a formulation of the third law that excludes questions would fail to comprehensively describe all cases of norm employment. The new law of norm employment aims to remedy all three of these issues: * the formulation of the law covers all norms rather than only methods; * it replaces ''deducible'' with ''derivable'', which in the context of mathematical model theory simply means to ''be semantically entailed'', and thus can potentially include non-deductive inferences (e.g. 
inductive, abductive); * it replaces a specific enumeration of epistemic elements with a general "elements of the mosaic". This formulation also offers the slight clarification that derivability strictly deals with derivation from a ''finite'' number of other elements.  
The ''law of theory demarcation'' tries to provide a mechanism of how the scientific status of theories changes over time. The assessment outcomes of the law (satisfied, unsatisfied, and inconclusive) are ''logically'' separated from their consequences. In particular, the assessment outcome of conclusively satisfying the demarcation criteria leads to a theory being scientific, the assessment outcome of conclusively not satisfying the criteria leads to the theory being unscientific, and the final inconclusive outcome can lead to the theory being scientific, unscientific, or uncertain.[[CITE_Sarwar and Fraser (2018)]]  +
[[The Law of Theory Demarcation (Sarwar-Fraser-2018)|The law of theory demarcation]] tries to provide a mechanism of how the scientific status of theories changes over time. The assessment outcomes of the law (satisfied, unsatisfied, and inconclusive) are ''logically'' separated from their consequences. In particular, the assessment outcome of conclusively satisfying the demarcation criteria leads to a theory being scientific, the assessment outcome of conclusively not satisfying the criteria leads to the theory being unscientific, and the final inconclusive outcome can lead to the theory being scientific, unscientific, or uncertain.[[CITE_Sarwar and Fraser (2018)]]  +
According to Barseghyan's original formulation of the second law, "theories become accepted only when they satisfy the requirements of the methods actually employed at the time. In other words there is only one way for a theory to become accepted – it must meet the implicit expectations of the scientific community".[[CITE_Barseghyan (2015)|p. 129]] According to the law, in order to become accepted, a theory is assessed by the [[Method|method]] employed at the time by the [[Scientific Community|scientific community]] in question.[[CITE_Barseghyan (2015)|p. 129]] The key idea behind the second law is that theories are evaluated by the criteria employed by the community at the time of the evaluation. Thus, different communities employing different methods of evaluation can end up producing different assessment outcomes. Barseghyan notes an important consequence of the law: <blockquote>So the question that the historian must ask here is: what were the expectations of the respective scientific communities that allowed for the acceptance of the respective natural philosophies? The second law suggests that, in order to reconstruct the actual method employed at a particular time, we must study the actual transitions in theories that took place at that time.[[CITE_Barseghyan (2015)|p. 130]]</blockquote> A further important consequence of the law has to do with the famous, long-standing debate on the status of novel predictions. Some authors (including Popper, Lakatos, and Musgrave) argue for a special status of novel predictions, while others (like Hempel, Carnap, and Laudan) argue that novel predictions do not substantially differ from post factum explanations or "retro-dictions". But by the second law, as Barseghyan writes, "the whole debate in its current shape is ill-founded".[[CITE_Barseghyan (2015)|p. 
131]] Whether novel predictions have a special status, in that "a new theory is expected to have confirmed novel predictions in order to become accepted", is, by the ''second law'', dependent on a community's employed method at the time. Instead of being concerned with all theories in all contexts, we must ask whether theories in specific communities at specific time periods were required to have confirmed novel predictions.  
Barseghyan argued that the second law directly follows from the [[Employed Method (Barseghyan-2015)|definition of employed method]]. According to him, "since employed method is defined as a set of implicit criteria actually employed in theory assessment, it is obvious that any theory that aims to become accepted must meet these requirements".[[CITE_Barseghyan (2015)|p. 129]] Thus, he argues, the second law is a mere explication of what is implicit in the definition of ''employed method''.  +
According to Barseghyan's initial position, "the second law is not a law in the traditional sense, for normally a law is supposed to have some empirical content, i.e. its opposite should be conceivable at least in principle. Obviously, the second law is a ''tautology'', since it follows from the definition of ''employed method''".[[CITE_Barseghyan (2015)|p. 129, footnote]]  +
The example is presented in [[Barseghyan (2015)|''The Laws of Scientific Change'']]: <blockquote>Even the most “revolutionary” theories must meet the actual requirements of the time in order to become accepted. Einstein’s general relativity is considered as one of the most ground-breaking theories of all time and, yet, it was evaluated in an orderly fashion and became accepted only after it satisfied the requirements of the time. From that episode we can reconstruct what the actual requirements of the time were. It is well known that the theory became accepted circa 1920, after the publication of the results of Eddington’s famous observations of the Solar eclipse of May 29, 1919 which confirmed one of the novel predictions of general relativity – namely, the deflection of light in the spacetime curved due to the Sun’s mass. Thus, it is safe to say that the scientific community of the time expected (among other things) that a new theory must have confirmed novel predictions.[[CITE_Barseghyan (2015)|p. 130]]</blockquote>  +
Another example from ''The Laws of Scientific Change'': <blockquote>Suppose we study the history of the transition from the Aristotelian-medieval natural philosophy to that of Descartes in France and that of Newton in Britain circa 1700. It follows from ''the second law'' that both theories managed to satisfy the actual expectations of the respective scientific communities, for otherwise they wouldn’t have become accepted.[[CITE_Barseghyan (2015)|p. 130]]</blockquote>  +
The second law explained by Hakob Barseghyan  +
According to this formulation of the second law, if a theory satisfies the acceptance criteria of the method actually employed at the time, then it becomes accepted into the mosaic; if it does not, it remains unaccepted; if it is inconclusive whether the theory satisfies the method, the theory can be accepted or not accepted. Unlike [[The Second Law (Barseghyan-2015)|the previous formulation of the second law]], this formulation makes the causal connection between ''theory assessment outcomes'' and ''cases of theory acceptance/unacceptance'' explicit. In particular, it specifies what happens to a theory in terms of its acceptance/unacceptance when a certain assessment outcome obtains. In addition, this new formulation is clearly ''not'' a tautology because it forbids certain logically possible scenarios, such as a theory satisfying the method of the time yet remaining unaccepted.  +
[[The Second Law (Patton-Overgaard-Barseghyan-2017)|The reformulation of the second law]] by Patton, Overgaard, and Barseghyan makes it explicit that the law is ''not'' a tautology as it clearly forbids certain logically conceivable courses of events.[[CITE_Patton, Overgaard, and Barseghyan (2017)|pp. 33-34]]  +
The second law explained by Gregory Rupik  +
Barseghyan's formulation of the third law states that a [[Method|method]] becomes [[Employed Method|employed]] only when it is deducible from other employed methods and accepted [[Theory|theories]] of the time. "Essentially," Barseghyan writes, "the third law stipulates that our accepted theories shape our employed methods".[[CITE_Barseghyan (2015)|p. 132]] According to this formulation, a method becomes employed when: # it strictly follows from some other employed methods and accepted theories, ''or'' # it implements some abstract requirements of other employed methods. In a nutshell, this suggests that [[Theory Acceptance|accepted theories]] shape the set of [[Employed Method|implicit criteria employed]] in theory assessment. In practice, the third law states that when a new phenomenon is discovered, this discovery produces an abstract requirement to take that discovery into account when testing relevant theories. This abstract requirement is then specified by a new employed method. The third law does not stipulate how methods should go about specifying any new abstract requirement. The third law functions as a descriptive account of how methods change, and is not responsible for describing how methods ought to change. As such, it is an effective means of explicating the requirements of other employed methods. The third law has an important corollary: scientific change is not necessarily a ''synchronous'' process, which notably differs from Kuhn's view of scientific change as a ''wholesale'', ''synchronous'' process.[[CITE_Barseghyan (2015)|p. 151]] This corollary is known as the [[Asynchronism of Method Employment theorem (Barseghyan-2015)]].  +
The ''third law'' has also proven useful in explicating such requirements as Confirmed Novel Predictions (CNP).[[CITE_Barseghyan (2015)|pp. 146-150]] According to the hypothetico-deductive method, a theory which challenges our accepted ontology must provide CNP in order to become accepted. However, the history of CNP has been a point of confusion for some time. By the Third Law, one can show that the requirement of CNP has not always been expected of new theories. When Newton published his Principia (1687), CNP were not a requirement of his professed method, yet they were still provided. This is also true in the cases of Fresnel's wave theory of light (~1820), Einstein's general relativity (~1920), continental drift theory (1960s), and electroweak unification (1970s).[[CITE_Barseghyan (2015)|p. 146]] On the other hand, Clark’s law of diminishing returns (1900) had no such predictions. They also played no role in the acceptance of Mayer's lunar theory (1760s), Coulomb's inverse square law (early 1800s), the three laws of thermodynamics (1850s), and quantum mechanics (1927).[[CITE_Barseghyan (2015)|p. 146]] Barseghyan explains that this is because "we do expect confirmed novel predictions but only in very special circumstances. There was one common characteristic in all those episodes… they all altered our views on the structural elements of the world".[[CITE_Barseghyan (2015)|p. 146]] For instance, in our key examples, Newton’s proposal of unobservable entities, such as gravity and absolute space, challenged the ''accepted ontology'' of the time, while Clark’s simply accounted for the data already available. Barseghyan presents his historical hypothesis that this specific requirement for CNP has been employed in natural science since the 18th century. 
Assuming he is correct (for the sake of argument), he continues: "The ''third law'' stipulates that the requirement of confirmed novel predictions could become employed only if it was a deductive consequence of the accepted theories and other employed methods of the time. So a question arises: what theories and methods does this requirement follow from?".[[CITE_Barseghyan (2015)|pp. 147-148]] Barseghyan answers the question with two principles. For one, there is a principle, implicit in our contemporary mosaic and accepted since the eighteenth century, that states: "the world is more complex than it appears in observations, that there is more to the world than meets the eye".[[CITE_Barseghyan (2015)|p. 148]] Thus, observations may not tell the whole story, as what we observe may be an effect of an unobservable. Secondly, "it has been accepted since the early eighteenth century that, in principle, any phenomenon can be produced by an infinite number of different underlying mechanisms".[[CITE_Barseghyan (2015)|p. 148]] "This leads us to the thesis of underdetermination that, in principle, any finite body of evidence can be explained in an infinite number of ways".[[CITE_Barseghyan (2015)|p. 148]] Therefore: <blockquote> The abstract requirement that follows from these two principles is that whenever we assess a theory that introduces some new internal mechanisms (new types of substances, particles, forces, fields, interaction, processes etc.) we must take into account that this hypothesized internal mechanism may turn out to be fictitious even if it manages to predict the known phenomena with utmost precision. In other words, we do not tolerate "fiddling" with the ''accepted ontology;'' if a theory attempts to modify the accepted ontology, it must show that it is not cooked-up.[[CITE_Barseghyan (2015)|p. 
148]]</blockquote> This abstract requirement can then be implemented in several ways, including through our contemporary requirement of ''confirmed novel predictions''. This is an illustration of the second scenario of method employment. Thus, in utilizing the third law, one can discover both when certain criteria become an implicit rule and under what conditions they are necessary.  
As Barseghyan explains, ''the double-blind trial method'' "is based on our belief that by performing a double-blind trial we forestall the chance of unaccounted effects, placebo effect, and experimenter’s bias".[[CITE_Barseghyan (2015)|p. 141]] The propositions on which this premise is based in turn derive from accepted theories; for example, "our belief that a trial with two similar groups minimizes the chance of unaccounted effects follows from our knowledge about statistical regularities, i.e. from our belief that two statistically similar groups can be expected to behave similarly ''ceteris paribus''".[[CITE_Barseghyan (2015)|p. 142]] Similarly, our knowledge of physiology and psychology leads to our understanding that we can void the placebo effect with fake pills.[[CITE_Barseghyan (2015)|p. 142]] Our knowledge of psychology allows us to understand that researchers can bias patients through their own knowledge of which group is which.[[CITE_Barseghyan (2015)|p. 142]] Clearly, these premises, although trivial, are currently accepted within our scientific mosaic.[[CITE_Barseghyan (2015)|p. 142]] Hence, the ''double-blind trial method'', although an ''implementation'' of abstract requirements, is still based on our currently accepted theories. This is true in all scenarios of ''implementation''.[[CITE_Barseghyan (2015)|p. 142]] Thus, methods follow deductively from elements of the mosaic whether they follow strictly from theories and methods or implement abstract requirements. This is an important similarity between the two scenarios for method employment.  +
In Barseghyan’s explication of the Aristotelian-Medieval method, he illustrates how Aristotelian natural philosophy impacted the method of the time.[[CITE_Barseghyan (2015)|p. 143]] Most notable is the acceptance of teleology – a theory which states that every thing has a nature it seeks to fulfill (e.g. an acorn’s nature is to become an oak tree). The best theories, then, would uncover the nature of a thing. If only the best theories are acceptable, this leads to the abstract requirement that "A theory is acceptable only if it grasps the nature of a thing". It stood to reason that the nature of a thing can only be intuitively grasped by an experienced person. This fundamental belief, combined with the abstract requirement outlined above, led to a method that specifies these requirements, known as the Aristotelian-Medieval method: "A proposition is acceptable if it grasps the nature of a thing through intuition schooled by experience, or if it is deduced from general intuitive propositions".[[CITE_Barseghyan (2015)|p. 145]] This is an illustration of how employed methods are deductive consequences of the accepted theories of the time.  +
"How exactly can changes in accepted theories trigger changes in employed methods? What is the precise mechanism of method change? How do methods become employed?".[[CITE_Barseghyan (2015)|p. 136]] Barseghyan presents the example of testing a new drug for alleviating depression as an example of the third law and in answer to these questions. In summary, the evolution of the drug trial methods is an example of the third law in action. For example, the discovery of the placebo effect in drug testing demonstrates that fake treatment can cause improvement in patient symptoms. As a result of its discovery the abstract requirement of “when assessing a drug’s efficacy, the possible placebo effect must be taken into account” was generated. This abstract requirement is, by definition, an accepted theory which stipulates that, if ignored, substantial doubt would be cast on any trial. As a result of this new theory, the Single-Blind Trial method was devised. The currently employed method in drug testing is the Double-Blind Trial, a method which specifies all of the abstract requirements of its predecessors. It is an apt illustration of how new methods are generated through the acceptance of new theories, as well as how new methods employ the abstract requirements of their predecessors.[[CITE_Barseghyan (2015)|pp. 132-152]] Specifically, Barseghyan begins with the question "How can we ensure that the improvement was due to the drug itself and not due to other unaccounted factors?" The question is answered by the implementation of a ''controlled trial'', wherein "we organize a trial with two groups of patients with the same condition – the active group and the control group. Only the patients in the active group receive the drug".[[CITE_Barseghyan (2015)|p. 134]] <blockquote>What we have here is a transition from one method to another triggered by a new piece of knowledge about the world. 
The initial method was something along the lines of hypothetico-deductivism: we had a hypothesis “the drug is effective in alleviating depression” and we wanted to confirm it experimentally. Once we learnt that the alleviation may be due to other factors, our initial method was modified to require that a drug’s efficacy must be tested in a controlled trial.[[CITE_Barseghyan (2015)|p. 134]]</blockquote> Another transition in method occurred upon the discovery of the ''placebo effect'', i.e. the fact "that the improvement in patients’ condition can be due to the patients’ belief that the treatment will improve their condition".[[CITE_Barseghyan (2015)|p. 135]] Now, <blockquote>it was no longer sufficient to have two groups of patients. If only one of the two groups received the drug then the resulting positive effect could be due to the patients’ belief that the drug was really efficient in alleviating their condition. The solution was to organize a ''blind trial.'' We take two groups of patients with the condition, but this time we make sure that both groups of patients believe that they undergo treatment. However, only the patients of the active group receive the real drug; to the patients in the control group we give a placebo (fake treatment).[[CITE_Barseghyan (2015)|p. 135]]</blockquote> Once again, Barseghyan writes, "this is an instance of a method change brought about by a change in accepted theories".[[CITE_Barseghyan (2015)|p. 135]] <blockquote>But why are we forced to introduce this new requirement to our method of drug testing? Well, because this new requirement follows deductively from two elements of the mosaic – from our knowledge that the results of testing a hypothesis about a drug’s efficacy may be voided by the placebo effect and from a more fundamental requirement that we must accept only the best available hypotheses.[[CITE_Barseghyan (2015)|p. 
137]]</blockquote> Notably, "while the new requirement is abstract (“the possible placebo effect must be taken into account”), the blind trial method is concrete, for it prescribes how exactly the testing should be done. Thus, ''the blind trial method'' specifies the new abstract requirement. This is the relation of ''implementation'': a more concrete method implements the requirements of a more abstract method by making them more concrete".[[CITE_Barseghyan (2015)|p. 138]] That is, ''the blind trial method'' is not the only possible ''implementation'' of the abstract requirement to take the placebo effect into account.[[CITE_Barseghyan (2015)|p. 138]] In Barseghyan's words, "the same abstract requirement can have many different implementations".[[CITE_Barseghyan (2015)|p. 139]] A final change in method occurred when ''experimenter's bias'' was discovered: <blockquote>The researchers that are in contact with patients can give patients conscious or unconscious hints as to which group is which. It is possible that the positive effect of the drug established in a blind trial was due to the fact that the patients in the placebo group knew that they were given a placebo. The method of drug testing was modified yet again to reflect this newly discovered phenomenon. The contemporary approach is to perform a double-blind trial where neither patients nor researchers know which group is which.[[CITE_Barseghyan (2015)|pp. 135-136]]</blockquote> [[File:Double Blind Trial Deduction (Barseghyan-2015-139).png|568px|center||]] The ''double-blind trial method'' is a further example of the relation of ''implementation''.  
The third law explained by Hakob Barseghyan  +
The [[The Third Law (Barseghyan-2015)|initial formulation]] of the law, proposed by Barseghyan in [[Barseghyan (2015)|''The Laws of Scientific Change'']], stated that a [[Method|method]] becomes [[Employed Method|employed]] only when it is deducible from other employed methods and accepted theories of the time.[[CITE_Barseghyan (2015)|p.132]] In that formulation, it wasn't clear whether employed methods follow from ''all'' or only ''some'' of the accepted theories and employed methods of the time. This led to a logical paradox which this reformulation attempts to solve.[[CITE_Sebastien (2016)]] This reformulation of the law makes explicit that an employed method need not necessarily follow from ''all'' other employed methods and accepted theories but only from ''some'' of them. This made it possible for an employed method to be logically inconsistent with, and yet [[The Zeroth Law|compatible]] with, openly accepted [[Methodology|methodological dicta]]. In all other respects, this formulation preserves the gist of Barseghyan's original formulation. According to the third law, a method becomes employed when: # it strictly follows from some subset of other employed methods and accepted theories, ''or'' # it implements some abstract requirements of other employed methods. This restates Barseghyan's original suggestion that [[Theory Acceptance|accepted theories]] shape the set of [[Employed Method|implicit criteria employed]] in theory assessment. When a new theory is accepted, this often leads to the employment of an abstract requirement to take that new theory into account when testing relevant contender theories. This abstract requirement is then specified by a new employed method. The evolution of the drug trial methods is an example of the third law in action. For example, the discovery of the placebo effect in drug testing demonstrates that fake treatment can cause improvement in patient symptoms. 
As a result of its discovery the abstract requirement of “when assessing a drug’s efficacy, the possible placebo effect must be taken into account” was generated. This abstract requirement is, by definition, an accepted theory which stipulates that, if ignored, substantial doubt would be cast on any trial. As a result of this new theory, the Single-Blind Trial method was devised. The currently employed method in drug testing is the Double-Blind Trial, a method which specifies all of the abstract requirements of its predecessors. It is an apt illustration of how new methods are generated through the acceptance of new theories, as well as how new methods employ the abstract requirements of their predecessors.[[CITE_Barseghyan (2015)|pp. 132-152]] In Barseghyan’s explication of the Aristotelian-Medieval method, he illustrates how Aristotelian natural philosophy impacted the method of the time. One of the key features of the Aristotelian-scholastic method was the requirement of intuition schooled by experience, i.e. that a proposition is acceptable if it grasps the nature of a thing through intuition schooled by experience. The requirement itself was a deductive consequence of several assumptions accepted at the time. One of the assumptions underlying this requirement was the idea that every natural thing has a nature, a substantial quality that makes a thing what it is (e.g. a human's nature is their capacity of reason). Another assumption underlying the requirement was the idea that the nature of a thing can be grasped intuitively by those who are most experienced with the things of that type. The requirement of intuitive truth followed from these assumptions. The scholastic-Aristotelian scholars wouldn’t require intuitive truths grasped by an experienced person if they didn’t believe that things have natures that could be grasped intuitively by experts. The third law has also proven useful in explicating such requirements as Confirmed Novel Predictions (CNP). 
According to the hypothetico-deductive method, a theory which challenges our accepted ontology must provide CNP in order to become accepted. However, the history of CNP has been a point of confusion for some time. By the Third Law, one can show that the requirement of CNP has not always been expected of new theories. When Newton published his Principia, CNP were not a requirement of his professed method, yet they were still provided. On the other hand, Clark’s law of diminishing returns had no such predictions. This is because Newton’s proposal of unobservable entities, such as gravity and absolute space, challenged the accepted ontology of the time, while Clark’s simply accounted for the data already available. Thus, in utilizing the Third Law, one can discover both when certain criteria become an implicit rule and under what conditions they are necessary.  
Sebastien's third law explained by Gregory Rupik  +
The initial third law explained by Hakob Barseghyan  +
Harder's reformulation of the Zeroth Law states that “at any moment of time, the elements of the mosaic are compatible with each other”. ''Compatibility'' is a broader concept than strict logical ''consistency'', and is determined by the compatibility criteria of each mosaic. In Barseghyan's presentation of the Zeroth Law, he explains it thus: "The law of compatibility has three closely linked aspects. First, it states that two theories simultaneously accepted in the same mosaic cannot be incompatible with one another. It also states that at any moment two simultaneously employed methods cannot be incompatible with each other. Finally, it states that, at any moment of time, there can be no incompatibility between accepted theories and employed methods".[[CITE_Barseghyan (2015)|p. 157]] Importantly, the Zeroth Law extends only to theories and methods that are ''accepted'', not merely ''used'' or ''pursued''. What does it mean that the ''law of compatibility'' also extends to employed ''methods''? This matter receives significant attention in [[Barseghyan (2015)]]. As per Barseghyan, if two disciplines employ different requirements, their methods are not incompatible, since they apply to two different disciplines; they merely "appear conflicting".[[CITE_Barseghyan (2015)|p. 162]] Even considering methods in the same discipline, two methods that "appear conflicting" are not necessarily incompatible. For instance, these methods may either be complementary ("connected by a logical AND"), providing multiple requirements for new theories, or provide ''alternative'' requirements for new theories ("connected by a logical OR").[[CITE_Barseghyan (2015)|pp. 162-163]] Thus, Barseghyan asserts that methods are only incompatible "when they state ''exhaustive'' conditions for the acceptance of a theory. 
Say the first method stipulates that a theory is acceptable if and only if it provides confirmed novel predictions, while the second method requires that in order to become accepted a theory must necessarily solve more problems than the accepted theory. In this case, the two methods are incompatible and, by the ''law of compatibility'', they cannot be simultaneously employed".[[CITE_Barseghyan (2015)||pp.163]] Barseghyan also proposes that the only possible conflict between ''methods'' and ''theories'' is an indirect one, given that theories are descriptive propositions, whereas methods are prescriptive and normative. Thus, for a method and a theory to be incompatible, the method would have to be incompatible with the methods that follow from that theory. We should be careful not to confuse the concepts of ''compatibility'' and ''consistency''. Barseghyan details the distinction between these two concepts: <blockquote>"the formal definition of inconsistency is that a set is inconsistent just in case it entails some sentence and its negation, i.e. ''p'' and ''not-p''. The classical logical principle of noncontradiction stipulates that ''p'' and ''not-p'' cannot be true ... In contrast, the notion of compatibility implicit in the zeroth law is much more flexible, for its actual content depends on the criteria of compatibility employed at a given time. As a result, the actually employed criteria of compatibility can differ from mosaic to mosaic. While in some mosaics compatibility may be understood in the classical logical sense of consistency, in other mosaics it may be more flexible ... in principle, there can exist such mosaics, where two theories that are inconsistent in the classical logical sense are nevertheless mutually compatible and can be simultaneously accepted within the same mosaic. 
In other words, a mosaic can be ''inconsistency-intolerant'' or ''inconsistency-tolerant'' depending on the criteria of compatibility employed by the scientific community of the time"[[CITE_Barseghyan (2015)||pp.154]].</blockquote> The abstract criteria of compatibility have many possible implementations within a community. These criteria are employed [[method|methods]], and therefore can change over time according to [[The Third Law (Barseghyan-2015)|the law of method employment]]. They dictate the standard that other theories and methods must meet so as to remain compatible with each other. The compatibility criterion of the contemporary scientific mosaic is believed to be along the lines of a non-explosive paraconsistent logic.[[CITE_Priest, Tanaka, and Weber (2015)]] This logic allows known contradictions, such as that between signal locality in special relativity and signal non-locality in quantum mechanics, to coexist without implying triviality. The compatibility criterion can be understood as a consequence of fallibilism about science: even a community's best theories are merely truth-like, not strictly true, and our current compatibility criteria appear to be formulated accordingly. It is very likely that our current compatibility criteria have not always been the ones employed. Discovering the kinds of compatibility criteria contained in the current and historical mosaics is an important empirical task for observational scientonomy. The zeroth law is thus named to emphasize that it applies to the mosaic viewed from a ''static'' perspective; the other three laws take a ''dynamic'' perspective.[[CITE_Barseghyan (2015)||pp.153]].  
According to Fraser and Sarwar, [[The Zeroth Law (Harder-2015)|Harder's formulation of the zeroth law]] "does not have any empirical content, because it follows directly from the notion of compatibility".[[CITE_Fraser and Sarwar (2018)|p. 69]]  +
As per Barseghyan, "In the second scenario (of inconsistency tolerance), we are normally willing to tolerate inconsistencies between an accepted general theory and a singular proposition describing some anomaly. In this scenario, the general proposition and the singular proposition describe the same phenomenon; the latter describes a counterexample for the former. However, the community is tolerant towards this inconsistency for it is understood that anomalies are always possible ... We appreciate that both the general theory in question and the singular factual proposition may contain grains of truth. In this sense, we are anomaly-tolerant".[[CITE_Barseghyan (2015)||pp.160]]  +
Barseghyan presents the following example of two hypothetical communities to illustrate the notion of ''incompatibility tolerance''. <blockquote>First, imagine a community that believes that all of their accepted theories are absolutely (demonstratively) true. This ''infallibilist'' community also knows that, according to classical logic, p and not-p cannot be both true. Since, according to this community, all accepted theories are strictly true, the only way the community can avoid triviality is by stipulating that any two accepted theories must be mutually consistent. In other words, by the third law, they end up employing the classical logical law of noncontradiction as their criterion of compatibility. Now, imagine another community that accepts the position of ''fallibilism''. This community holds that no theory in empirical science can be demonstratively true and, consequently, all accepted empirical theories are merely quasi-true. But if any accepted empirical theory is only quasi-true, it is possible for two accepted empirical theories to be mutually inconsistent. In other words, this community accepts that two contradictory propositions may both contain grains of truth, i.e. to be quasi-true. [[CITE_Bueno et al (1998)]]. In order to avoid triviality, this community employs a paraconsistent logic, i.e. a logic where a contradiction does not imply everything. This fallibilist community does not necessarily reject classical logic; it merely realizes that the application of classical logic to quasi-true propositions entails triviality. Thus, the community also realizes that the application of classical principle of noncontradiction to empirical science is problematic, for no empirical theory is strictly true. As a result, by the third law, this community employs criteria of compatibility very different from those employed by the infallibilist community.[[CITE_Barseghyan (2015)||pp.154-6]]</blockquote>  +
Barseghyan presents the following example of the indirect incompatibility that can exist between theories and methods: <blockquote>Say there is an accepted theory which says that better nutrition can improve a patient’s condition. We know from the discussion in the previous section that the conjunction of this proposition with the basic requirement to accept only the best available theories yields a requirement that the factor of improved nutrition must be taken into account when testing a drug’s efficacy. Now, envision a method which doesn’t take the factor of better nutrition into account and prescribes that a drug’s efficacy should be tested in a straightforward fashion by giving it only to one group of patients. This method will be incompatible with the requirement that the possible impact of improved nutrition must be taken into account. Therefore, indirectly, it will also be incompatible with a theory from which the requirement follows.[[CITE_Barseghyan (2015)|p.163-4]]</blockquote>  +
Barseghyan writes that "the conflict between general relativity and quantum physics is probably the most famous illustration of this phenomenon," that phenomenon being the knowing acceptance of two contradicting theories by a community. "We normally take general relativity as the best description of the world at the level of massive objects and quantum physics as the best available description of the micro-world. But we also know that, from the classical logical perspective, the two theories contradict each other. The inconsistency of their conjunction becomes apparent when they are applied to objects that are both extremely massive and extremely small (i.e. a singularity inside a black hole)".[[CITE_Barseghyan (2015)||pp.154]] Relativity maintains that all signals are local. That is, no signal can travel faster than light. Quantum theory, on the other hand, predicts faster-than-light influences. This has been known since the 1930s,[[CITE_Einstein, Podolsky, and Rosen (1935)]] yet both quantum theory and relativity remain in the mosaic: despite this contradiction, the community accepts both theories as the best available descriptions of their respective domains.  +
Barseghyan presents the following historical examples of the simultaneous pursuit of mutually incompatible theories. Of course, we should note that there is "nothing extraordinary" about this: it is the pursuit of different options that makes scientific change possible! <blockquote>Take for instance, Clausius’s attempt to derive Carnot’s theorem, where he drew on two incompatible theories of heat – Carnot’s caloric theory of heat, where heat was considered a fluid, and also Joule’s kinetic theory of heat, where the latter was conceived as a “force” that can be converted into work.[[CITE_Meheus (2003)]]. Thus, the existence of incompatible propositions in the context of pursuit is quite obvious. There is good reason to believe that “reasoning from an inconsistent theory usually plays an important heuristic role”[[CITE_Meheus(2003)||pp.131]] and that "the use of inconsistent representations of the world as heuristic guideposts to consistent theories is an important part of scientific discovery"[[CITE_Smith(1988)||pp.429]].[[CITE_Barseghyan (2015)||pp.158]]</blockquote>  +
Barseghyan presents the following example of the possibility for simultaneous use of mutually incompatible theories, even in the same scientific project. "Circa 1600, astronomers could easily use both Ptolemaic and Copernican astronomical theories to calculate the ephemerides of different planets. Similarly, in order to obtain a useful tool for calculating atomic spectra, Bohr mixed some propositions of classical electrodynamics with a number of quantum hypotheses.[[CITE_Smith (1988)]] Finally, when nowadays we build a particle accelerator, we use both classical and quantum physics in our calculations. Thus, sometimes propositions from two or more incompatible theories are mixed in order to obtain something practically useful".[[CITE_Barseghyan (2015)||pp.157-8]]  +
The zeroth law explained by Hakob Barseghyan  +
We find two hypothetical scenarios for ''inconsistency tolerance'' in [[Barseghyan (2015)]]. Here is the first: <blockquote> We seem to be prepared to accept two mutually inconsistent propositions into the mosaic provided that they do not have the same object. More specifically, two propositions seem to be considered compatible by the contemporary community when, by and large, they explain different phenomena, i.e. when they have sufficiently different fragments of reality as their respective objects. When determining the compatibility or incompatibility of any two theories, the community seems to be concerned with whether the theories can be limited to their specific domains. Suppose ''Theory 1'' provides descriptions for phenomena ''A'', ''B'', and ''C'', while ''Theory 2'' provides descriptions for phenomena ''C'', ''D'', and ''E''. Suppose also that the descriptions of phenomenon ''C'' provided by the two theories are inconsistent with each other ... Although the two theories are logically inconsistent, normally this is not an obstacle for the contemporary scientific community. Once the contradiction between the two theories becomes apparent, the community seem to be limiting the applicability of at least one of the two theories by saying that its laws do not apply to phenomenon ''C''. While limiting the domains of applicability of conflicting theories, we may still believe that the laws of both theories should ideally be applicable to phenomenon ''C''. Yet, we understand that currently their laws are not applicable to phenomenon ''C''. 
In other words, we simply concede that our current knowledge of phenomenon ''C'' is deficient.[[CITE_Barseghyan (2015)||pp.158-9]]</blockquote> The most readily apparent example of this phenomenon is the oft-cited conflict between general relativity and quantum physics: "While we admit that ideally singularities within black holes must be subject to the laws of both theories, we also realize that currently the existing theories cannot be consistently applied to these objects, for combining the two theories is not a trivial task. Consequently, we admit that there are many aspects of the behaviour of these objects that we are yet to comprehend. Thus, it is safe to say that nowadays we accept the two theories only with a special “patch” that temporarily limits their applicability".[[CITE_Barseghyan (2015)||pp.159]] To Barseghyan, then, "it appears as though the reason why the community considers the two theories compatible despite their mutual inconsistency is that these theories are the best available descriptions of two considerably different domains".[[CITE_Barseghyan (2015)||pp.159]]  
At any moment of the history of science, there are certain ''theories'' that the scientific community of the time accepts as the best available descriptions of their respective domains. According to the original definition of the term suggested in [[Barseghyan (2015)|''The Laws of Scientific Change'']], the class of ''theory'' includes only those propositions which attempt to describe a certain object under study. A theory may refer to any set of propositions that attempt to describe something. Theories may be empirical (e.g. theories in natural or social science) or formal (e.g. logic, mathematics). Theories may be of different levels of complexity and elaboration, for they may consist of hundreds of systematically linked propositions, or of a few loosely connected propositions. They may or may not be axiomatized, formalized, or mathematized. The class encompasses all propositions which attempt to tell us how things were, are, or will be, i.e. substantive propositions of empirical and formal sciences. The definition excludes [[Normative Theory|normative propositions]], such as those of methodology, ethics, or aesthetics.[[CITE_Barseghyan (2015)|pp. 3-5]] Examples of theories satisfying the definition include the theory that the Earth is round, Newton's law of universal gravitation, the phlogiston theory of combustion, quantum mechanics, Einstein's theory of relativity, and the theory of evolution.  +
It has often been argued that theories are best construed not as propositions but as models, which are abstract set-theoretic entities. Importantly, on this model-theoretic or semantic view of theories, models do not contain propositions but are structures of non-linguistic elements.[[CITE_Suppe (1989)]] Whether this is indeed the case is to be established not in this metatheory but in an actual TSC (in collaboration with HSC). What is important from our perspective is that, even on this model-theoretic view, knowledge of the world depends crucially on formulating descriptive propositions.[[CITE_Chakravartty (2007)]] For something to become accepted as true or truthlike, it must be expressible in descriptive propositions at least in principle. Often propositions are not explicitly formulated but are accepted tacitly. However, what matters is that in principle they too can be expressed as propositions. If something is not expressible as a proposition, then it cannot have a truth value and cannot be accepted or unaccepted as the best description of anything. Take the example of the Aristotelian-medieval model of the cosmos. When the medieval scientific community accepted this model, the community essentially accepted a tightly connected set of propositions, such as “the Earth is in the centre of the universe”, “the Moon, the Sun and all other planets are embedded in concentric crystalline spheres which revolve around the central Earth”, “all celestial bodies are made of element aether”, “aether is indestructible”, “aether has a natural tendency to revolve around the centre of the universe”, “all terrestrial bodies are made of the four terrestrial elements”, etc. In short, while models may well play an important role in scientific practice, no part of these models can be actually accepted or rejected if it is not expressible in descriptive propositions. Thus, from the perspective of our project, it is safe to treat theories as collections of propositions.  +
Unlike Barseghyan's [[Theory (Barseghyan-2015)|original definition]] of ''theory'', this definition is deliberately ''neutral'' with respect to the descriptive/prescriptive divide. Thus, it allows for the existence of theories of various types and is not limited to descriptive theories.  +
TODO: Description here  +
This definition of the term makes it possible to apply the notion of theory acceptance to any subtype of [[Theory|theory]]. Unlike the previous definitions of the term, it doesn't imply any specific subtypes of theory, but explicitly states the relation between theories and [[Question|questions]] they attempt to answer.  +
This definition of ''theory acceptance'' makes it explicit that any accepted theory is a ''scientific'' theory. It assumes that the question of whether a theory is accepted is meaningless without the theory being scientific. The point here is that no scientist would ask whether they should accept a theory without believing, if only implicitly, that the theory is indeed scientific. Since only scientific theories have the potential to become accepted, and because only some of these do in fact become accepted, it follows that all of the accepted theories are scientific.  +
This definition expands on [[Theory Acceptance (Barseghyan-2015)|the original definition of the term]] proposed by [[Hakob Barseghyan|Barseghyan]] to ensure that the term is applicable not only to ''descriptive'' theories but also to ''normative'' theories. It assumes that descriptive theories attempt to provide descriptions of their respective objects, while normative theories attempt to prescribe a certain object, the latter being understood as a certain state of affairs.  +
According to this ''ontology'' of theory assessment outcomes, when a theory is assessed by a method, one of the three following outcomes can obtain:[[CITE_Barseghyan (2015)|p. 199]] * [[Outcome Accept|''Accept'']]: this assessment outcome prescribes that the theory must be accepted. * [[Outcome Not Accept|''Not Accept'']]: this assessment outcome prescribes that the theory must not be accepted. * [[Outcome Inconclusive|''Inconclusive'']]: this outcome allows for the theory to be accepted but doesn't dictate so. While the first two assessment outcomes are ''conclusive'', the third outcome is ''inconclusive'', as it permits more than one possible course of action. Thus, in this view, a theory's assessment outcome is not necessarily ''conclusive''; an inconclusive outcome is also conceivable.  +
According to this ''ontology'' of theory assessment outcomes, when a theory is assessed by a method, one of the three following outcomes can obtain:[[CITE_Barseghyan (2015)|p. 199]] * [[Outcome Satisfied|''Satisfied'']]: the theory is deemed to conclusively meet the requirements of the method employed at the time. * [[Outcome Not Satisfied|''Not Satisfied'']]: the theory is deemed to conclusively not meet the requirements of the method employed at the time. * [[Outcome Inconclusive|''Inconclusive'']]: it is unclear whether or not the requirements of the method employed at the time are met. While the first two assessment outcomes are ''conclusive'', the third outcome is ''inconclusive'', as it permits more than one possible course of action. Thus, in this view, a theory's assessment outcome is not necessarily ''conclusive''; an inconclusive outcome is also conceivable. This ontology is assumed by [[The Second Law (Patton-Overgaard-Barseghyan-2017)|the second law]] of scientific change as formulated by Patton, Overgaard, and Barseghyan in 2017.  +
According to Oh, there is some historical evidence for theory decay.[[CITE_Oh (2021)]]  +
Theories are part of the process of scientific change.  +
According to this theory, theories are a subtype of epistemic element. Among other things, this assumes that epistemic stances can be taken by epistemic agents towards theories.  +
A theory is said to be pursued if it is considered worthy of further development. [[CITE_Barseghyan (2015)|pp. 30-42]] An example is provided by mid-seventeenth century science. Throughout this period, the Aristotelian natural philosophy, with its geocentric cosmology, four elements, and four causes, remained [[Theory Acceptance|accepted]] by the scientific community of Europe as evidenced, for example, by its central place in university curricula. The theories from this period that we are most familiar with from modern popular and professional literature, such as Copernicus's heliocentric cosmology and Galileo's theories of motion, were not accepted but pursued theories. More generally, these included the mechanical natural philosophy championed by a community which included [[Rene Descartes|Descartes]], Huygens, Boyle, and many others, and the magnetical natural philosophy, espoused by Gilbert, Kepler, Stevin, Wilkins and others. In our modern world, the major accepted physical theories include Einstein's relativity theory, quantum mechanics, and the standard model of particle physics. A variety of other theories are not accepted but are being pursued. These include various versions of string theory and attempts to quantize general relativity to create a quantum theory of gravity.[[CITE_Barseghyan (2015)|p. 40]] While a variety of unaccepted theories are typically pursued, accepted theories also typically continue to be pursued. General relativity has been the accepted theory of gravitation since roughly 1918.[[CITE_Barseghyan (2015)|p. 203]] The theory and its implications for astrophysics and cosmology continue to be pursued in a variety of ways. For example, in 2016, researchers at the Laser Interferometer Gravitational-Wave Observatory in the United States announced the first-ever direct detection of gravitational waves, thereby verifying a major prediction of the theory.[[CITE_Castelvecchi and Witze (2016)]][[CITE_Abbott et al. (2016)]]  +
Hakob Barseghyan’s lecture on pursued theories  +
According to '''the theory rejection theorem''', a [[Theory|theory]] becomes '''rejected''' only when other theories that are incompatible with the theory become accepted. Implicit in the theorem is the idea that each theory is assessed on an "individual basis by its compatibility with the propositions of the newly accepted theory".[[CITE_Barseghyan (2015)|p. 168]] If it turns out that a previously accepted theory is compatible with the newly accepted theory, it remains in the agent's mosaic. Barseghyan notes that, although we normally expect a theory to be replaced by another theory in the same "field" of inquiry, this is not necessarily the case. For example, he writes, "HSC knows several cases where an accepted theory became rejected simply because it wasn’t compatible with new accepted theories of some other fields".[[CITE_Barseghyan (2015)|p. 171]] Barseghyan summarizes '''the theory rejection theorem''' as such: <blockquote>In short, when the axioms of a theory are replaced by another theory, some of the theorems may nevertheless manage to stay in the mosaic, provided that they are compatible with the newly accepted theory. This is essentially what the ''theory rejection theorem'' tells us. Thus, if someday our currently accepted general relativity gets replaced by some new theory, the theories that followed from general relativity, such as the theory of black holes, may nevertheless manage to remain in the mosaic. [[CITE_Barseghyan (2015)|p. 171]] </blockquote>  +
By [[The First Law (Barseghyan-2015)|the first law]] for theories, an accepted theory remains accepted until it is replaced by other theories. By [[Compatibility Corollary (Fraser-Sarwar-2018)|the compatibility corollary]], the elements of the scientific mosaic are compatible with each other at any moment of time. It follows, therefore, that a theory can only become rejected when it is replaced by an incompatible theory or theories.[[CITE_Barseghyan (2015)|pp. 167-172]] [[CITE_Fraser and Sarwar (2018)|pp. 72-74]] [[File:Theory Rejection Theorem deduction (Barseghyan-Fraser-Sarwar-2018).png|623px|center||]]  +
Barseghyan presented the initial deduction (2015) of the theorem:[[CITE_Barseghyan (2015)|p. 167]] <blockquote> By the first law for theories, we know that an accepted theory can become rejected only when it is replaced in the mosaic by some other theory. But the law of compatibility doesn’t specify under what conditions this replacement takes place. For that we have to refer to the zeroth law, which states that at any moment of time the elements of the mosaic are mutually compatible. Suppose that a new theory meets the requirements of the time and becomes accepted into the mosaic. Question: what happens to the other theories of the mosaic? While some of the accepted theories may preserve their position in the mosaic, other theories may be rejected. The fate of an old accepted theory depends on whether it is compatible with the newly accepted theory. If it is compatible with the new accepted theory, it remains in the mosaic; the acceptance of the new theory doesn’t affect that old theory in any way. This is normally the case when the new theory comes as an addition to the theories that are already in the mosaic. For instance, when the new theory happens to be the first accepted theory of its domain (i.e. when there is a new field of science that has never had any accepted theories before). Yet, if an old theory is incompatible with the new one, the old theory becomes rejected, for otherwise the mosaic would contain mutually incompatible elements, which is forbidden by the law of compatibility. Therefore, there is only one scenario when a theory can no longer remain in the mosaic, i.e. when other theories which are incompatible with that theory become accepted.</blockquote> [[File:Theory-rejection-theorem.jpg|607px|center||]]  +
Pandey makes a case that the first law and all of its corollaries are tautological.[[CITE_Pandey (2023)]]  +
The rejection of ''theology proper'' (the study of God, his being, his attributes, and his works) from the scientific mosaic is a historical illustration of the ''Theory Rejection theorem'' and of how accepted theories in one field may become rejected due to theories in other fields. In essence, theological propositions were rejected, but were not replaced with more theological propositions. It is difficult to track the exact dynamics of theology's "exile," but it is possible that these propositions were rejected and replaced with the thesis of ''agnosticism'', or that they were rejected due to the acceptance of ''evolutionary biology''. The "exile," as Barseghyan terms it, could also have been a very gradual process, with the rejection of theological propositions coming about for different reasons in different mosaics. Despite the difficulties in tracking down the exact dynamics of the gradual rejection of theology from the scientific mosaic, Barseghyan summarizes the evidence as such: "what must be appreciated here is that a theory can be replaced in the mosaic by theories pertaining to other fields of inquiry".[[CITE_Barseghyan (2015)|p. 172]]  +
Another example of the theory rejection theorem, specifically illustrating that theories may be rejected not only because of the acceptance of new theories in their respective fields, is the case of ''natural astrology'' presented in [[Barseghyan (2015)]]. <blockquote>The exile of astrology from the mosaic is yet another example. It is well known that astrology was once a respected scientific discipline and its theories were part of the mosaic. Of course, not all of the astrology was accepted; it was the so-called ''natural astrology'' – the theory of celestial influences on physical phenomena of the terrestrial region – that was part of the Aristotelian-medieval mosaic. ... Although, for now, we cannot reconstruct all the details or even the approximate decade when the exile of natural astrology took place, one thing is clear: when the once-accepted theory of natural astrology became rejected, it wasn’t replaced by another theory of natural astrology.[[CITE_Barseghyan (2015)|p. 172]]</blockquote>  +
Barseghyan considers the case of ''plenism,'' "the view that there can be no empty space (i.e. no space absolutely devoid of matter)", as a key historical illustration of the '''Theory Rejection theorem''' in [[Barseghyan (2015)]]. <blockquote> Within the system of the Aristotelian-medieval natural philosophy, ''plenism'' was one of many theorems. Yet, when the Aristotelian natural philosophy was replaced by that of Descartes, ''plenism'' remained in the mosaic, for it was a theorem in the Cartesian system too. To appreciate this we have to consider the Aristotelian-medieval law of violent motion, which states that an object moves only if the applied force is greater than the resistance of the medium. In that case, according to the law, the velocity will be proportional to the force and inversely proportional to resistance. Otherwise the object won’t move; its velocity will be zero ... Taken as an axiom, this law has many interesting consequences. It follows from this law, that if there were no resistance the velocity of the object would be infinite. But this is absurd since nothing can move infinitely fast (for that would mean being at two places simultaneously). Therefore, there should always be some resistance, i.e. something that fills up the medium. Thus, we arrive at the conception of plenism ... There weren’t many elements of the Aristotelian-medieval mosaic that maintained their state within the Cartesian mosaic. The conception of plenism was among the few that survived through the transition. In the Cartesian system, plenism followed directly from the assumption that extension is the attribute of matter and that no attribute can exist independently from the substance in which it inheres ... In short, when the axioms of a theory are replaced by another theory, some of the theorems may nevertheless manage to stay in the mosaic, provided that they are compatible with the newly accepted theory. This is essentially what the theory rejection theorem tells us. 
Thus, if someday our currently accepted general relativity gets replaced by some new theory, the theories that followed from general relativity, such as the theory of black holes, may nevertheless manage to remain in the mosaic.[[CITE_Barseghyan (2015)|p. 168-170]] </blockquote>  
According to Pandey's new formulation of '''the theory rejection theorem''', a [[Theory|theory]] becomes '''rejected''' only when other [[Epistemic Element|epistemic elements]] that are incompatible with the theory become accepted. This formulation differs from Barseghyan's [[Theory Rejection theorem (Barseghyan-2015)|original formulation]] in that it allows a theory to be replaced by an epistemic element of ''any'' type, not just by other theories. In other respects, Pandey's formulation is similar to Barseghyan's. Implicit in both theorems is the idea that each theory is assessed on an "individual basis by its compatibility with the propositions of the newly accepted theory".[[CITE_Barseghyan (2015)|p. 168]] If it turns out that a previously accepted theory is compatible with the newly accepted theory, it remains in the agent's mosaic. Although we normally expect a theory to be replaced by another theory in the same "field" of inquiry, Barseghyan and Pandey both agree that this is not necessarily the case. For example, Barseghyan writes, "HSC knows several cases where an accepted theory became rejected simply because it wasn’t compatible with new accepted theories of some other fields".[[CITE_Barseghyan (2015)|p. 171]] Similarly, Pandey provides several examples of this phenomenon in ''Dilemma of The First Law''.[[CITE_Pandey (2023)]] Barseghyan summarizes '''the theory rejection theorem''' as such: <blockquote>In short, when the axioms of a theory are replaced by another theory, some of the theorems may nevertheless manage to stay in the mosaic, provided that they are compatible with the newly accepted theory. This is essentially what the ''theory rejection theorem'' tells us. Thus, if someday our currently accepted general relativity gets replaced by some new theory, the theories that followed from general relativity, such as the theory of black holes, may nevertheless manage to remain in the mosaic. [[CITE_Barseghyan (2015)|p. 171]] </blockquote>  +
Pandey makes a case that the first law and all of its corollaries are tautological.[[CITE_Pandey (2023)]]  +
Another example of the theory rejection theorem, specifically illustrating that theories may be rejected not only because of the acceptance of new theories in their respective fields, is the case of ''natural astrology'' presented in [[Barseghyan (2015)]]. <blockquote>The exile of astrology from the mosaic is yet another example. It is well known that astrology was once a respected scientific discipline and its theories were part of the mosaic. Of course, not all of the astrology was accepted; it was the so-called ''natural astrology'' – the theory of celestial influences on physical phenomena of the terrestrial region – that was part of the Aristotelian-medieval mosaic. ... Although, for now, we cannot reconstruct all the details or even the approximate decade when the exile of natural astrology took place, one thing is clear: when the once-accepted theory of natural astrology became rejected, it wasn’t replaced by another theory of natural astrology.[[CITE_Barseghyan (2015)|p. 172]]</blockquote>  +
The rejection of ''theology proper'' (the study of God, his being, his attributes, and his works) from the scientific mosaic is a historical illustration of the ''Theory Rejection theorem'' and of how accepted theories in one field may become rejected due to theories in other fields. In essence, theological propositions were rejected, but were not replaced with more theological propositions. It is difficult to track the exact dynamics of theology's "exile," but it is possible that these propositions were rejected and replaced with the thesis of ''agnosticism'', or that they were rejected due to the acceptance of ''evolutionary biology''. The "exile," as Barseghyan terms it, could also have been a very gradual process, with the rejection of theological propositions coming about for different reasons in different mosaics. Despite the difficulties in tracking down the exact dynamics of the gradual rejection of theology from the scientific mosaic, Barseghyan summarizes the evidence as such: "what must be appreciated here is that a theory can be replaced in the mosaic by theories pertaining to other fields of inquiry".[[CITE_Barseghyan (2015)|p. 172]]  +
Barseghyan considers the case of ''plenism,'' "the view that there can be no empty space (i.e. no space absolutely devoid of matter)", as a key historical illustration of the '''Theory Rejection theorem''' in [[Barseghyan (2015)]]. <blockquote> Within the system of the Aristotelian-medieval natural philosophy, ''plenism'' was one of many theorems. Yet, when the Aristotelian natural philosophy was replaced by that of Descartes, ''plenism'' remained in the mosaic, for it was a theorem in the Cartesian system too. To appreciate this we have to consider the Aristotelian-medieval law of violent motion, which states that an object moves only if the applied force is greater than the resistance of the medium. In that case, according to the law, the velocity will be proportional to the force and inversely proportional to resistance. Otherwise the object won’t move; its velocity will be zero ... Taken as an axiom, this law has many interesting consequences. It follows from this law, that if there were no resistance the velocity of the object would be infinite. But this is absurd since nothing can move infinitely fast (for that would mean being at two places simultaneously). Therefore, there should always be some resistance, i.e. something that fills up the medium. Thus, we arrive at the conception of plenism ... There weren’t many elements of the Aristotelian-medieval mosaic that maintained their state within the Cartesian mosaic. The conception of plenism was among the few that survived through the transition. In the Cartesian system, plenism followed directly from the assumption that extension is the attribute of matter and that no attribute can exist independently from the substance in which it inheres ... In short, when the axioms of a theory are replaced by another theory, some of the theorems may nevertheless manage to stay in the mosaic, provided that they are compatible with the newly accepted theory. This is essentially what the theory rejection theorem tells us. 
Thus, if someday our currently accepted general relativity gets replaced by some new theory, the theories that followed from general relativity, such as the theory of black holes, may nevertheless manage to remain in the mosaic.[[CITE_Barseghyan (2015)|p. 168-170]] </blockquote>  
TODO: Description here  +
An [[Epistemic Agent|epistemic agent]] is said to rely on an [[Epistemic Tool|epistemic tool]] ''iff'' there is a procedure through which the tool can provide an acceptable source of knowledge for answering some [[Question|question]] under the employed [[Method|method]] of that agent. Note that tool reliance, like [[Authority Delegation|authority delegation]], is reducible to the theories and methods of an agent.  +
U
The [[The Third Law|third law]] allows for two distinct scenarios of method employment. A [[Method|method]] may become employed because it follows strictly from accepted [[Theory|theories]] or employed methods, or it may implement the abstract requirements of some other employed method. This second scenario allows for creative ingenuity and depends on the technology of the time; the abstract requirements may therefore be fulfilled in many different ways, which allows for underdeterminism.[[CITE_Barseghyan (2015)|p. 198]] [[File:Underdetermined-method-change.jpg|607px|center||]]  +
The process of [[Theory Acceptance|theory assessment]] under the TSC is underdetermined for two reasons. First, only [[Theory|theories]] that are constructed are available for assessment. Whether or not a theory is ever constructed is, at least partly, a matter of creativity, and is therefore outside the scope of the TSC. Second, it is at least theoretically possible for a process of theory assessment to be inconclusive. This might be because the requirements of the method employed at the time are vague (e.g. the Aristotelian requirement of "intuition schooled by experience").[[CITE_Barseghyan (2015)|p. 199-200]] [[File:Underdetermined-theory-change.jpg|607px|center||]]  +