Property:Description

From Encyclopedia of Scientonomy

This is a property of type Text.

The values of this property are to be stored by converting all instances of CiteRef:: to CITE_. E.g.:

{{#set: Description={{#replace:{{{Description|}}}|CiteRef::|CITE_}}|}}

Make the opposite replacement when retrieving the value. E.g.:

{{#replace: {{#show: {{FULLPAGENAME}}|?Description}}|CITE_|CiteRef::}}

This is done to enable transclusion of descriptions on other pages. When the descriptions were stored as is, a simple {{#show: {{FULLPAGENAME}}|?Description}} would fail to display the semantic citations properly. Instead of a superscript [1] with bibliographic info, it would show something like (Barseghyan (2015)).
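The round trip amounts to a reversible string substitution. The following Python sketch is illustrative only (the wiki itself performs the substitution with the {{#replace:}} parser function shown above); it shows why the storing and retrieving replacements must mirror each other:

```python
# Illustrative sketch of the storage round trip described above;
# the wiki performs the same substitution with {{#replace:}}.

def store_description(text: str) -> str:
    """Escape semantic citation markup before the value is stored."""
    # Assumes the raw text never already contains "CITE_";
    # otherwise the round trip would not be lossless.
    return text.replace("CiteRef::", "CITE_")

def retrieve_description(stored: str) -> str:
    """Apply the opposite replacement when the value is retrieved."""
    return stored.replace("CITE_", "CiteRef::")

raw = "See [[CiteRef::Barseghyan (2015)|p. 111]] for details."
stored = store_description(raw)
assert "CiteRef::" not in stored            # safe to transclude verbatim
assert retrieve_description(stored) == raw  # substitution is lossless
```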

Showing 100 pages using this property.
A
This definition aims to discern between accidental groups, i.e. the ones that don't have a collective intentionality, and actual communities, i.e. groups that do have collective intentionality.  +
By allowing the discussants to suggest alternative formulations in their comments, the workflow incentivizes commenting and aids consensus building. It often happens that the discussants agree that a little tweak in the original formulation would solve the problem. This can help speed up the advancement of our communal knowledge. In contrast, when the discussants are not permitted to reformulate the original formulations, they have no choice but to write a whole new paper arguing for what is otherwise a little tweak to the original formulation. Not only is this wasteful, but it also creates a bottleneck where consensus formation is postponed due to bureaucratic restrictions. Thus, it is important to remove this bottleneck and allow the participants to alter original formulations. If a discussion yielded a new formulation, any such formulation should be clearly stated and added to the respective suggested modification, possibly under a separate heading (e.g. by distinguishing “Original Suggestion” from “Final Suggestion”). By default, the new formulation should bear the name of the author(s) of the original suggested modification, unless the original author(s) decides to give credit to those who significantly contributed to the reformulation. This should be decided collegially by the author, the commentators, and the editors on a case-by-case basis.  +
Some facts ''ought'' to be relevant to the [[Theory Acceptance|assessment of a theory]] because the content of the theory itself implies their relevance, and others ought to be relevant simply by definition. When assessing a theory concerning scientific change, relevant facts that ought necessarily to be considered include questions pertinent to scientific change processes. For example: What [[Theory|theories]] and [[Method|methods]] were part of the [[Scientific Mosaic|scientific mosaic]] of the community in question, both before and after the instance of [[Scientific Change|scientific change]]? What modifications were proposed and what parts of the mosaic did they intend to replace? Which of these modifications became accepted into the mosaic, and how? Relevant questions will depend on accepted views about the [[Scope of Scientonomy|scope of scientonomy]]. For example, if scientonomy deals with scientific change [[Scope of Scientonomy - Individual and Social|at the level of scientific communities]], then facts about the accepted views of communities ought to be relevant, and the views of particular individuals ought not. If scientonomy [[Scope of Scientonomy - Construction and Appraisal|deals only with theory appraisal]] and not with theory construction, then it follows that facts concerning the former, but not the latter, ought to be considered. Relevant facts will also depend on the content of the mosaic at the time in question. For example, it is anachronistic to speak of religious constraints on science in the seventeenth century since, at that time, religion and natural philosophy were not regarded as separate domains of knowledge, but as part of the same mosaic.[[CITE_Barseghyan (2015)|p. 111]]  +
The theorem states that the employment of a method is not necessarily simultaneous with the acceptance of a new theory. Being a direct logical consequence of [[The Third Law|the third law]], the theorem highlights the fact that some methods are a result of the implementation of some abstract requirements of other methods. In this way, a new method can be devised as a means of resolving a particular creative gap, and subsequently become employed long after the acceptance of the theory that led to the employment of the abstract method.  +
Barseghyan presents a historical example showing that scientific change is not necessarily a ''synchronous'' process. <blockquote> When it comes to acquiring data about such minute objects as molecules or living cells, the unaided human eye is virtually useless. This proposition yields, among other things, an abstract requirement that, when counting the number of cells, the resulting value is acceptable only if it is obtained with an “aided” eye. This abstract requirement has been implemented in a variety of different ways. First, there is the counting chamber method where the cells are placed in a counting chamber – a microscope slide with a special sink – and the number of cells is counted manually under a microscope. There is also the plating method where the cells are distributed on a plate with a growth medium and each cell gives rise to a single colony. The number of cells is then deduced from the number of colonies. In addition, there is the flow cytometry method where the cells are hit by a laser beam one by one and the number of cells is counted by means of detecting the light reflected by the cells. Finally, there is the spectrophotometry method where the number of cells is obtained by means of measuring the turbidity in a spectrophotometer.[[CITE_Barseghyan (2015)|pp. 151-152]]</blockquote> These are four different implementations of the ''same'' abstract requirement, which were, importantly, all devised and employed at different times.  +
One key corollary of the third law is put forth in Barseghyan (2015). "Scientific change is not necessarily a ''synchronous process'': changes in theories are not necessarily simultaneous with changes in methods".[[CITE_Barseghyan (2015)|p. 150]] <blockquote>Suppose a new theory becomes accepted and some new abstract constraints become imposed. In this case, we can say that the acceptance of a theory resulted in the employment of a new method and the employment of a new method was synchronous with the acceptance of a new theory. But we also know that there is the second scenario of method employment, where a method implements some abstract requirements of other employed methods. In this scenario, there is a certain creative gap between abstract requirements that follow directly from accepted theories and methods that implement these abstract requirements. Devising a new method that would implement abstract requirements takes a fair amount of ingenuity and, therefore, there are no guarantees that these abstract requirements will be immediately followed by a new concrete method. In short, changes in methods are not necessarily simultaneous with changes in theories.[[CITE_Barseghyan (2015)|pp. 150-151]]</blockquote>  +
If we consider the fact that scientific research is so specialized that no single research lab can account for all accepted theories in their discipline, we quickly recognize that there exists some form of distribution of labour among subcommunities. Authority delegation is an attempt to capture that distribution of labour in scientonomic terms. What this definition of authority delegation jointly expresses is the acceptance of a theory and the associated employment of a method. In any instance of authority delegation, the delegating community accepts that the community delegated to is an expert in some field. It follows from accepting that expertise that the same delegating community will simply employ a method to accept whatever the expert community says to accept. Importantly, the method employed by the delegating community is distinct from that employed by the community delegated to; it would be misleading to suggest that the delegating community employs the same method as the community delegated to. This definition is careful to capture such particularities, as the definition merely expresses the new theory accepted and the method employed by the delegating community. For a simple example, consider a relation of authority delegation between physicists and biologists. A community of physicists can be said to be delegating authority over the life sciences to a community of biologists, so long as the community of physicists ''both'' accepts that biologists are experts in the life sciences ''and'' will accept a theory on the life sciences if told so by the biologists.  +
Authority delegation explained by Gregory Rupik  +
The definition tweaks the [[Authority Delegation (Overgaard-Loiselle-2016)|original definition]] of the term by [[Nicholas Overgaard|Overgaard]] and [[Mirka Loiselle|Loiselle]] to ensure that the relationship of authority delegation can obtain between [[Epistemic Agent|epistemic agents]] of all types. It also substitutes [[Question|''question'']] for ''topic'', as the former is the proper scientonomic term that should be used.  +
B
Hakob Barseghyan presenting the redrafted ontology  +
There is only one type of agent that can bear a mosaic: the community.[[CITE_Barseghyan (2015)|pp. 43-52]] As for ''individual'' epistemic agents, their status and role in the process of scientific change is unclear; thus, the notion of an individual bearing a mosaic is problematic.  +
C
One potential way of addressing the problem of closure mechanism is by introducing a “countdown” mechanism, where the community is given a three-month (90-day) discussion period for commenting on a suggested modification and, if no objections are raised during this period, the proposed modification becomes accepted by default. According to Shaw and Barseghyan: <blockquote>This allows for the possibility of inclusive debate without stalling on the development of our theory of scientific change. One disadvantage is that it doesn’t address the worry about masked objections raised in the previous section – people still may not explicitly dissent.[[CITE_Shaw and Barseghyan (2019)|p. 11]]</blockquote>  +
To ensure that a suggested modification is properly evaluated and a verdict is reached, the community should be given a certain time period to discuss the modification, after which a communal vote should be taken. This vote should be offered to all members of the community, who should be given a short timeframe to decide. In principle, this strategy should contribute to the transparency and inclusivity of the workflow by involving a larger portion of the community in the workflow. Since voting doesn't require a great deal of effort, this approach also addresses the problem of lack of commenting. As stated by Shaw and Barseghyan: <blockquote>In a sense, this proposal would look like an election where there are two main phases. In the first phase, arguments will be made but no particular line of action will be taken. In the second phase, the vote will take place and a decision will be made by the will of the people. In addition, this strategy has the advantage of overcoming the problem of masked objections. People can vote anonymously, expressing their view and approval or dissatisfaction with a proposed modification, without fear of any sort of reprisal. One disadvantage is that a vote is not always grounded in good reasons. Community members may ignore important considerations and vote without being informed on the topic, thus leading to a less-than-ideal consensus. As we are witnessing in the world around us, the will of the people does not always pick out the best choice.[[CITE_Shaw and Barseghyan (2019)|p. 11]]</blockquote>  +
A [[Group|group]] that has a collective intentionality.  +
When dealing with a community, it might be useful to analyze it in terms of its constituent subcommunities (e.g. the community of particle physicists within the community of physicists). But such an analysis is based on the assumption that a community can consist of other communities, i.e. subcommunities. This assumption is by no means trivial; indeed, there might exist a view that each community is to be treated separately as one indivisible whole and, thus, any talk of its constituents is meaningless. According to Overgaard, communities can be said to consist of other communities.[[CITE_Overgaard (2017)|p. 58]] Thus, there is such a thing as a subcommunity, i.e. a community that is part of a larger community.  +
This definition of ''compatibility'' captures the main gist of the notion as it was originally intended by [[Rory Harder|Harder]] and [[Hakob Barseghyan|Barseghyan]] - the idea that two elements are compatible when they can coexist within the same mosaic.  +
The corollary is meant to restate the content of [[Rory Harder|Harder]]'s [[The Zeroth Law (Harder-2015)|the zeroth law]] of scientific change. Since the corollary follows deductively from the definition of [[Compatibility (Fraser-Sarwar-2018)|''compatibility'']], it highlights that the zeroth law as it was formulated by Harder is tautologous. Since the corollary covers the same idea as the zeroth law, all the theorems that were thought to be deducible by means of the zeroth law (e.g. [[Theory Rejection theorem (Barseghyan-2015)|the theory rejection theorem]] or [[Method Rejection theorem (Barseghyan-2015)|the method rejection theorem]]) can now be considered deducible by means of the corollary.  +
Like [[Demarcation Criteria|demarcation]] and [[Acceptance Criteria|acceptance criteria]], compatibility criteria can be part of a community's employed method. The community employs these criteria to determine whether two theories are mutually compatible or incompatible, i.e. whether they can be simultaneously part of the community's mosaic. Different communities can have different compatibility criteria. While some communities may opt to employ the logical law of noncontradiction as their criterion of compatibility, other communities may be more tolerant towards logical inconsistencies. According to Barseghyan, the fact that these days scientists "often simultaneously accept theories which strictly speaking logically contradict each other is a good indication that the actual criteria of compatibility employed by the scientific community might be quite different from the classical logical law of noncontradiction".[[CITE_Barseghyan (2015)|p. 11]] For example, this is apparent in the case of general relativity vs. quantum physics where both theories are accepted as the best available descriptions of their respective domains (i.e. they are considered ''compatible''), but are known to be in conflict when applied simultaneously to such objects as black holes.  +
Barseghyan presents the following historical example when introducing compatibility criteria in [[Barseghyan (2015)]]. <blockquote>It can be argued that our contemporary criteria of compatibility have not always been employed. Consider the case of the reconciliation of the Aristotelian natural philosophy and metaphysics with Catholic theology. As soon as most works of Aristotle and his Muslim commentators were translated into Latin (circa 1200), it became obvious that some propositions of Aristotle’s original system were inconsistent with several dogmas of the then-accepted Catholic theology. Take, for instance, the Aristotelian conceptions of determinism, the eternity of the cosmos, and the mortality of the individual soul. Evidently, these conceptions were in direct conflict with the accepted Catholic doctrines of God’s omnipotence and free will, of creation, and of the immortality of the individual human soul.[[CITE_Lindberg (2007)|pp. 228-253]] Moreover, some of the passages of Scripture, when taken literally, appeared to be in conflict with the propositions of the Aristotelian natural philosophy. In particular, Scripture seemed to imply that the Earth is flat (e.g. Daniel 4:10-11; Mathew 4:8; Revelation 7:1), which was in conflict with the Aristotelian view that the Earth is spherical. It is no surprise, therefore, that many of the propositions of the Aristotelian natural philosophy were condemned on several occasions during the 13th century.[[CITE_Lindberg (2007)|pp. 226-249]] To resolve the conflict, Albert the Great, Thomas Aquinas and others modified both the Aristotelian natural philosophy and the biblical descriptions of natural phenomena to make them consistent with each other. On the one hand, they stipulated that the laws of the Aristotelian natural philosophy describe the natural course of events only insofar as they do not limit God’s omnipotence, for God can violate any laws if he so desires. 
Similarly, they modified Aristotle’s determinism by adding that the future of the cosmos is determined by its present only insofar as it is not affected by free will or divine miracles. Similar modifications were introduced to many other Aristotelian propositions. On the other hand, it was also made clear that biblical descriptions of cosmological and physical phenomena are not to be taken literally, for Scripture often employs a simple language in order to be accessible to common folk. Thus, where possible, literal interpretations of Scripture were supposed to be replaced by interpretations based on the Aristotelian natural philosophy.[[CITE_Grant (2004)|pp. 220-224, 245]] Importantly, it is only after this reconciliation that the modified Aristotelian-medieval natural philosophy became accepted by the community.[[CITE_Lindberg (2007)|pp. 250-251]] This and similar examples seem to be suggesting that the compatibility criteria employed by the medieval scientific community were quite different from those employed nowadays. While apparently we are inconsistency-tolerant (at least when dealing with theories in empirical science), the medieval scientific community was inconsistency-intolerant in the sense that they wouldn’t tolerate any open inconsistencies in the mosaic.[[CITE_Barseghyan (2015)|pp. 160-161]]</blockquote>  
Like [[Demarcation Criteria|demarcation]] and [[Acceptance Criteria|acceptance criteria]], compatibility criteria can be part of an epistemic agent's employed method. An epistemic agent employs these criteria to determine whether two elements (e.g. methods, theories, questions) are mutually compatible or incompatible, i.e. whether they can be simultaneously part of the agent's mosaic. In principle, these criteria can be employed to determine the compatibility of elements present in the mosaic, as well as those outside of it (e.g. scientists often think about whether a proposed theory is compatible with the theories actually accepted at the time). [[Patrick Fraser|Fraser]] and [[Ameer Sarwar|Sarwar]] point out that [[Hakob Barseghyan|Barseghyan]]'s [[Compatibility Criteria (Barseghyan-2015)|original definition]] of the term "excludes a simple point that is assumed elsewhere in scientonomy: elements other than theories (i.e. methods and questions) may be compatible or incompatible with other elements (which, again, need not be theories)".[[CITE_Fraser and Sarwar (2018)|p. 72]] To fix this omission, Fraser and Sarwar "suggest that the word ‘theories’ be changed to ‘elements’ to account for the fact that the compatibility criteria apply to theories, methods, and questions alike".[[CITE_Fraser and Sarwar (2018)|p. 72]] Different communities can have different compatibility criteria. While some communities may opt to employ the logical law of noncontradiction as their criterion of compatibility, other communities may be more tolerant towards logical inconsistencies. According to Barseghyan, the fact that these days scientists "often simultaneously accept theories which strictly speaking logically contradict each other is a good indication that the actual criteria of compatibility employed by the scientific community might be quite different from the classical logical law of noncontradiction".[[CITE_Barseghyan (2015)|p. 11]] For example, this is apparent in the case of general relativity vs. quantum physics where both theories are accepted as the best available descriptions of their respective domains (i.e. they are considered ''compatible''), but are known to be in conflict when applied simultaneously to such objects as black holes.  
According to [[Patrick Fraser|Fraser]] and [[Ameer Sarwar|Sarwar]], "[[Compatibility (Fraser-Sarwar-2018)|compatibility]] is a distinct epistemic stance that agents can take towards elements".[[CITE_Fraser and Sarwar (2018)|p. 70]] They show this by arguing that it is possible to take the stance of compatibility towards a pair of elements without taking any of the other stances towards these elements. Thus, compatibility is distinct from [[Theory Acceptance|acceptance]], since two elements need not be in the same mosaic, or even accepted by any agent, to be considered compatible in principle. For example, an epistemic agent may consider Ptolemaic astrology compatible with Aristotelian natural philosophy without accepting either Ptolemaic astrology or Aristotelian natural philosophy. Compatibility is also different from [[Theory Use|use]], since a pair of theories can be considered compatible regardless of whether any of them is considered useful. For instance, one can consider quantum mechanics and evolutionary biology compatible, while finding only the former useful. Finally, compatibility is also distinct from [[Theory Pursuit|pursuit]], since an agent can consider a pair of theories compatible with or without pursuing either. An agent, for instance, may find two alternative quantum theories pursuitworthy while clearly realizing that the two are incompatible.  +
<blockquote>The traditional version of comparativism holds that when two theories are compared it doesn’t make any difference which of the two is currently accepted. In reality, however, the starting point for every theory assessment is the current state of the mosaic. Every new theory is basically an attempt to modify the mosaic by inserting some new elements into the mosaic and, possibly, by removing some old elements from the mosaic. Therefore, what gets decided in actual theory assessment is whether a proposed modification is to be accepted. In other words, we judge two competing theories not in a vacuum, as the traditional version of ''comparativism'' suggests, but only in the context of a specific mosaic. It is this version of the comparativist view that is implicit in the laws of scientific change.[[CITE_Barseghyan (2015)|p. 184]] </blockquote> Theory assessment is an assessment of a proposed modification of the [[Scientific Mosaic|scientific mosaic]] by the [[Method|method]] employed at the time. By [[The First Law|the first law]], a [[Theory|theory]] already in the mosaic is no longer appraised. By [[The Second Law|the second law]], it is only assessed when it first enters the mosaic (see the detailed deduction below).[[CITE_Barseghyan (2015)|pp. 185-196]] Barseghyan does note the following: "if, for whatever reason, we need to compare two competing theories disregarding the current state of the mosaic, we are free to do so, but we have to understand that in actual scientific practice such abstract comparisons play no role whatsoever. Any theory assessment always takes into account the current state of the mosaic".[[CITE_Barseghyan (2015)|p. 186]]  +
Barseghyan presents the following description of the deduction of the ''contextual appraisal theorem'': <blockquote> By the second law, in actual theory assessment a contender theory is assessed by the method employed at the time ... In addition, it follows from the first law for theories that a theory is assessed only if it attempts to enter into the mosaic; once in the mosaic, the theory no longer needs any further appraisal. In this sense, the accepted theory and the contender theory are never on equal footing, for it is up to the contender theory to show that it deserves to become accepted. In order to replace the accepted theory in the mosaic, the contender theory must be declared superior by the current method; to be “as good as” the accepted theory is not sufficient.</blockquote> [[File:Contextual-appraisal.jpg|607px|center||]]  +
Barseghyan (2015) provides another rich illustration for the Contextual Appraisal theorem with "the famous Eucharist episode which took place in the second half of the 17th century," which is a subtle but important piece of the already difficult scientonomic case of the 18th-century transition from Cartesian to Newtonian natural philosophy.[[CITE_Barseghyan (2015)|p. 190]] Barseghyan describes the episode as follows: <blockquote>This episode has been often portrayed as a clear illustration of how religion affects science. In particular, the episode has been presented as though the acceptance of Cartesianism in Paris was delayed due to the role played by the Catholic Church. It is a historical fact that Descartes’s natural philosophy was harshly criticized by the Church. In 1663, his works were even placed on the Index of Prohibited Books and in 1671 his conception was officially banned from schools. Thus, at first sight, it may appear as though the acceptance of the Cartesian science in Paris was indeed hindered by religion. Yet, upon closer scrutiny, it becomes obvious that this interpretation is too superficial. When Descartes constructed his natural philosophy, it soon turned out that it had a very troubling consequence: it wasn’t readily reconcilable with the doctrine of transubstantiation accepted by the Aristotelian-Catholic scientific community of Paris. The idea of transubstantiation was proposed by Thomas Aquinas in his Summa Theologiae as an explanation of one of the Christian dogmas – namely, that of the Real Presence which states that, in the Eucharist, Christ is really present under the appearances of the bread and wine (i.e. literally, rather than metaphorically or symbolically). In his explanation of Real Presence, Aquinas employed Aristotelian concepts of substance and accident. 
In particular, he stated that in the Eucharist the consecration of bread and wine effects the change of the whole substance of the bread into the substance of Christ’s body and of the whole substance of the wine into the substance of his blood. Thus, what happens in the Eucharist is transubstantiation – a transition from one substance to another. As for the accidents of the bread and wine such as their taste, color, smell etc., Aquinas held that they remain intact, for transubstantiation doesn’t affect them. The doctrine of transubstantiation soon became the accepted Catholic explanation of the Real Presence. The problem was that Descartes’s theory of matter didn’t provide any mechanism similar to that stated in the doctrine of transubstantiation. To be more precise, it followed from Descartes’s original theory that transubstantiation was impossible. Recall that, according to Descartes, the only principal attribute of matter is extension: to be a material object amounts to occupying some space. It follows from this basic axiom that accidents such as smell, color, or taste are effects produced upon our senses by the configuration and motion of material particles. In other words, we simply cannot perceive the accidents of bread and wine unless there is bread and wine in front of us. What makes bread what it is, what constitutes its substance (to use Aristotle’s terms) is a specific combination of material particles; and the same goes for wine. Thus, when the substance of bread changes into the substance of Christ’s body, in the Cartesian theory, it means that some combination of particles which constitutes the bread changes into another combination of particles which constitutes Christ’s body. 
The key point here is that, in Descartes’s theory, it is impossible for Christ’s body to have the appearance of bread, since the appearance is merely an effect produced by that specific combination of particles upon our senses; Christ’s body and blood simply cannot produce the accidents of bread and wine. Obviously, on this point, Descartes’s theory was in conflict with the doctrine of transubstantiation. This conflict became the focal point of criticism of Descartes’s theory. To a 21st-century reader used to a clear-cut distinction between science and religion this may seem a purely religious matter. Yet, in the second half of the 17th century, this was precisely a scientific concern. The crucial point is that back then theology wasn’t separate from other scientific disciplines: the scientific mosaic of the time included many theological propositions such as “God exists”, “God is omnipotent”, or “God created the world”. These propositions were part of the mosaic just as any other accepted proposition. If we could visit 17th-century Paris, we would see that the dogma of Real Presence and the doctrine of transubstantiation weren’t something foreign to the scientific mosaic of the time – they were accepted parts of it alongside such propositions as “the Earth is spherical”, “there are four terrestrial elements”, “there are four bodily fluids” and so on. Thus, Descartes’s theory was in conflict not with some “irrelevant religious views” but with a key element of the scientific mosaic of the time, the doctrine of transubstantiation. More precisely, the problem was that back then no theory was allowed to be in conflict with the accepted theological propositions. This latter requirement was part of the method of the time. The requirement strictly followed from the then-accepted belief that theological propositions are infallible. Yet, eventually, the Cartesian natural philosophy did become accepted in Paris. 
If the laws of scientific change are correct, it could become accepted only with a special patch that would reconcile it with the doctrine of transubstantiation. It is not clear as to what exactly this patch was. To be sure, there is vast literature on different Cartesian solutions of the problem: the solutions proposed by Descartes, Desgabets, and Arnauld are all well known. However, I have failed to find a single historical narrative revealing which of these patches became accepted in the mosaic alongside the Cartesian natural philosophy circa 1700. Based on the available data, I can only hypothesize that the accepted patch was the one proposed by Arnauld in 1671. According to Arnauld’s solution, the Cartesian natural philosophy concerns only the natural course of events. However, since God is omnipotent, he is able to alter the natural course of events. Thus, he can turn bread and wine into the body and blood of Christ even if that is not something that can be expected naturally. Moreover, since our capacity of reason is limited, God can do things that are beyond our reason. Therefore, it is possible for Christ to be really present under the accidents of the bread and wine without our being able to comprehend the mechanism of that presence. One reason why I think that this could be the accepted patch is that a similar solution was also proposed by both Régis and Malebranche. The latter basically held that what happens in the Eucharist is a miracle and is not to be explicated in philosophical terms. In this context, the position of Malebranche is especially important for, at the time, his Recherche de la Vérité was among the main Cartesian texts studied at the University of Paris. Again, I cannot be sure that the accepted patch was exactly that of Arnauld and Malebranche; only closer scrutiny of the curriculum of Paris University in 1700-1740 as well as other relevant sources can settle this issue. 
Yet, the laws of scientific change tell us that there should be one patch or another – the Cartesian natural philosophy couldn’t have been accepted without one. In short, initially the Cartesian theory didn’t satisfy the requirements implicit in the mosaic of the time, namely it was in conflict with one of those propositions which were not supposed to be denied. Thus, the acceptance of Descartes’s theory was hindered not because “dogmatic clergy” didn’t like it on some mysterious religious grounds, but because initially it didn’t satisfy the requirements of the time. This point will become clear if we turn our attention to the scientific mosaic of Cambridge of the same time period. Circa 1660, the mosaics of Paris and Cambridge were similar in many respects. For one, they both included all the elements of the Aristotelian-medieval natural philosophy. In addition, they shared the basic Christian dogmas, such as the dogma of Real Presence. Yet, they were different in one important respect: whereas the mosaic of Paris included the propositions of Catholic theology, the mosaic of Cambridge included the propositions of Anglican theology. Namely, the Cambridge mosaic didn’t include the doctrine of transubstantiation. In that mosaic, the Cartesian theory was only incompatible with the Aristotelian-medieval natural philosophy which it aimed to replace. This difference proved crucial. Whereas reconciling the Cartesian natural philosophy with the doctrine of transubstantiation was a challenging task, reconciling it with the dogma of Real Presence wasn’t difficult. One such reconciliation was suggested by Descartes himself and was developed by Desgabets. The idea was that the bread becomes the body of Christ by virtue of being united with the soul of Christ, while the material particles of the bread remain intact. For the Catholic, this solution was unacceptable, for it denied the doctrine of transubstantiation and, therefore, was a heresy. 
Yet, for the Anglican, this solution could be acceptable, since the doctrine of transubstantiation wasn’t part of the Anglican mosaic. Thus, whereas the Catholic was faced with a seemingly insurmountable problem of reconciling the Cartesian natural philosophy with the doctrine of transubstantiation, the Anglican didn’t have that problem. This explains why the whole Eucharist case was almost exclusively a Catholic affair. This episode illustrates the main point of the contextual appraisal theorem: a theory is assessed only in the context of a specific mosaic and the outcome of the assessment depends on the state of the mosaic of the time.[[CITE_Barseghyan (2015)|p. 190-196]]</blockquote>  
Barseghyan's deduction of the ''contextual appraisal theorem'' can be further understood through a brief example. Consider a situation wherein "the proponents of some alternative quantum theory argue that the currently accepted theory is no better than their own quantum theory".[[CITE_Barseghyan (2015)|p. 185]] Importantly, however, such proponents take theory assessment out of its ''historical context''. "Particularly," Barseghyan comments, "they ignore the phenomenon of scientific inertia – they ignore that, in order to remain in the mosaic, the accepted theory doesn’t need to do anything (by ''the first law'' for theories) and that it is their obligation to show that their contender theory is better (by ''the second law'')".[[CITE_Barseghyan (2015)|p. 185]]  +
The depiction of Galileo as a hero, standing up against church authorities to present his "clearly superior" position, is well-known.[[CITE_Barseghyan (2015)|p. 187]] However, as Barseghyan rightly notes, it fails to take the contemporaneous ''scientific mosaic'' of Galileo's community into account. The traditional account, placing both theories against each other in a vacuum, "failed to appreciate both that theory assessment is an assessment of a proposed modification and that a theory is assessed by the method employed at the time. Once we focus our attention on the state of the scientific mosaic of the time," though, "it becomes obvious that the scientific community of the time simply couldn’t have acted differently".[[CITE_Barseghyan (2015)|p. 188]] Let's consider the ''scientific mosaic'' circa the 1610s. It consisted of many interconnected Aristotelian-medieval theories, including ''geocentrism,'' which "was a deductive consequence of the Aristotelian law of natural motion and the theory of elements".[[CITE_Barseghyan (2015)|p. 188]] So, "it was impossible to simply cut geocentrism out of the mosaic and replace it with heliocentrism – the whole Aristotelian theory of elements would have to be rejected as well". And it was only made more difficult because "the theory of elements itself was tightly connected with many other parts of the mosaic," such as the ''possibility of transformation of elements'' and the medical theory of the time (four humours).[[CITE_Barseghyan (2015)|p. 189]] "In short," summarizes Barseghyan, "in order to make the rejection of geocentrism possible, a whole array of other elements of the Aristotelian-medieval mosaic would have to be rejected as well".[[CITE_Barseghyan (2015)|p. 189]] Now by ''the first law for theories'' and ''the theory rejection theorem'', "only the acceptance of an alternative set of theories could defeat the theories of the Aristotelian-medieval mosaic".[[CITE_Barseghyan (2015)|p. 189]] "Unfortunately for Galileo," concludes Barseghyan, "at the time there was no acceptable contender theory comparable in scope with the theories of the Aristotelian-medieval mosaic ... Galileo didn’t have an acceptable replacement for all the elements of the mosaic that had to be rejected together with geocentrism ... The traditional interpretation of this historical episode failed to appreciate this important point and, instead, preferred to blame the dogmatism of the clergy".[[CITE_Barseghyan (2015)|p. 189]] Another key problem with the typical presentation of this episode is its assessment, "not by the implicit requirements of the time, but by the requirements of the hypothetico-deductive method, which became actually employed a whole century after the episode took place. Namely, Galileo was said to have shown the superiority of the Copernican heliocentrism by confirming some of its novel predictions," which, by the traditional account, was considered "a clear-cut indication of the superiority of the Copernican hypothesis".[[CITE_Barseghyan (2015)|p. 189]] Yet, Barseghyan's more careful study of the episode reveals the following: "the requirements of hypothetico-deductivism had little in common with the actual expectations of the community of the time. Although the task of reconstructing the late Aristotelian-medieval method of natural philosophy is quite challenging and may take a considerable amount of labour, one thing is clear: the requirement of confirmed novel predictions was not among implicit expectations of the community of the time. Back then, theories simply didn’t get assessed by their confirmed novel predictions".[[CITE_Barseghyan (2015)|pp. 189-90]] And we note that this important point becomes apparent through the ''contextual appraisal theorem''.  
The core questions of a [[Discipline| discipline]] are those general questions that are essential to a discipline, having the power to define it and establish its boundaries within a hierarchy of questions. They are identified as such in the discipline's [[Delineating Theory| delineating theory]].[[CITE_Patton and Al-Zayadi (2021)]] The [[Scientific Mosaic| scientific mosaic]] consists of [[Theory| theories]] and [[Question| questions]].[[CITE_Barseghyan (2015)]][[CITE_Barseghyan (2018)]][[CITE_Rawleigh (2018)]][[CITE_Sebastien (2016)]] Questions form hierarchies in which more specific questions are [[Subquestion| subquestions]] of broader questions. Theories enter into this hierarchy as well since questions presuppose theories, and theories are answers to questions. It is the position of core questions within such hierarchies that confers upon them the power to define and establish the boundaries of a discipline by indicating which questions and theories are included. For example, the question 'how did living things originate as a result of evolution?' is a core question of evolutionary biology.  +
A core theory of a [[Discipline| discipline]] is a [[Theory| theory]] presupposed by the discipline's [[Core Question| core questions]].[[CITE_Patton and Al-Zayadi (2021)]] The [[Scientific Mosaic| scientific mosaic]] consists of [[Theory| theories]] and [[Question| questions]].[[CITE_Barseghyan (2015)]][[CITE_Barseghyan (2018)]][[CITE_Rawleigh (2018)]][[CITE_Sebastien (2016)]] Questions constitute hierarchies where more specific questions are [[Subquestion| subquestions]] of broader questions. Within this hierarchy, certain general questions play a special role as core questions. These questions are essential to a discipline, and have the power to identify it and determine its boundaries. For example, a core question of evolutionary biology would be 'how did living species originate as a result of evolution?'. Questions always presuppose theories, which endow them with semantic content. Those presupposed by a discipline's core questions are that discipline's core theories. For our example, the theory in question would be the neo-Darwinian theory of evolution by natural selection.  +
D
This somewhat simplistic definition of ''definition'' is meant to highlight that definitions are themselves theories (statements, propositions). As a result, any [[Epistemic Stances|stance]] that can be taken by [[Epistemic Agent|epistemic agents]] towards theories can also be taken towards definitions.  +
According to Barseghyan, definitions are an integral part of the process of scientific change.[[CITE_Barseghyan (2018)]]  +
According to Barseghyan, definitions are essentially a species of theories.  +
Nicholas Overgaard explains the topic  +
One can specify a [[Discipline|discipline]] in terms of a set of its [[Core Question| core questions]]. A delineating theory is a second-order [[Theory|theory]] identifying this set of core questions, and allowing it to exist as an [[Epistemic Element| epistemic element]] within the [[Scientific Mosaic|mosaic]].[[CITE_Patton and Al-Zayadi (2021)]] For example, the delineating theory of modern physics might identify 'How do matter and energy behave?' as a core question of modern physics.  +
[[The Law of Theory Demarcation (Sarwar-Fraser-2018)|The law of theory demarcation]] states that a theory is deemed as scientific only if it satisfies the demarcation criteria employed by the epistemic community at the time. [[Theory Acceptance (Fraser-Sarwar-2018)|The definition of theory acceptance]] suggested by [[Patrick Fraser|Fraser]] and [[Ameer Sarwar|Sarwar]] states that an accepted theory is a ''scientific'' theory that is taken to be the best available description or prescription of its object of study. It follows from these two premises that whenever a theory is accepted, it must also have satisfied the demarcation criteria of the time. After all, if it did not, then the definition of theory acceptance would be contradicted. Therefore, if the definition of theory acceptance and the law of theory demarcation are accepted, then it must also be accepted that accepted theories satisfy the criteria of demarcation. This demarcation-acceptance synchronism is presented somewhat more formally in the following diagram: [[File:Demarcation-Acceptance_Synchronism_theorem_deduction_(Fraser-Sarwar-2018).png|761px|center||]]  +
Hakob Barseghyan's lecture on Cartesian Worldview  +
According to [[Zoe Sebastien|Sebastien]]'s definition of the term, descriptive theories aim at ''describing'' a certain object under study, where ''describe'' is understood in the broad sense and includes ''explain'', ''predict'', etc. Thus, the term encompasses theories that attempt to describe a certain phenomenon, process, or state of affairs in the past, present, or future. All of the following propositions would qualify as ''descriptive'': * The acceleration of an object as produced by a net force is directly proportional to the magnitude of the net force, in the same direction as the net force, and inversely proportional to the mass of the object. (''A general description of a phenomenon''.) * Paris is the capital of France. (''A description of a current state of affairs''.) * Augustus was the first emperor of the Roman Empire. (''A description of a past state of affairs''.) * Halley's comet will next appear in the night sky in the year 2062. (''A description of a future event, i.e. a prediction''.) Typically, most propositions produced by both empirical and formal sciences would fall under the category of ''descriptive theory''. Among others, this includes substantive propositions of physics, chemistry, biology, psychology, sociology, and economics, as well as those of historical sciences. Excluded from this category are [[Normative Theory|normative propositions]], such as those of methodology, ethics, or aesthetics.  +
According to Barseghyan, many theories attempt to describe something. Thus, there are descriptive theories.[[CITE_Barseghyan (2015)|p. 5]]  +
A discipline ''A'' is characterized by a non-empty set of [[Core Question| core questions]] ''Q<sub>CA</sub>'' and a [[Delineating Theory| delineating theory]] stating that ''Q<sub>CA</sub>'' are the core questions of the discipline.[[CITE_Patton and Al-Zayadi (2021)]] The [[Scientific Mosaic|scientific mosaic]] consists of [[Theory|theories]] and [[Question|questions]].[[CITE_Barseghyan (2015)]][[CITE_Barseghyan (2018)]][[CITE_Rawleigh (2018)]][[CITE_Sebastien (2016)]] As a whole, a discipline ''A'' consists of a set of accepted questions ''Q<sub>A</sub>'', and the theories which provide answers to those questions, or which those questions presuppose. [[CITE_Patton and Al-Zayadi (2021)]] Questions form hierarchies, with more specific questions being [[Subquestion| subquestions]] of more general questions. Theories find a place in these hierarchies, since each theory is an attempt to answer a certain question, and each question presupposes certain theories. Because of such hierarchical relations, it is possible to characterize a discipline by identifying a set of [[Core Question| core questions]], ''Q<sub>CA</sub>''. These core questions are judged by some [[Epistemic Agent| agent]] to be related to one another, essential to a discipline, and definitive of its boundaries. The other questions of a discipline are subquestions of its core questions. A set, as such, can't be part of a scientific mosaic consisting of theories and questions. We, therefore, take a discipline to be defined by a [[Delineating Theory| delineating theory]] that identifies the set of core questions ''Q<sub>CA</sub>'' characterizing that discipline.  +
[[Epistemic Stances Towards Theories|Theories]] and [[Epistemic Stances Towards Questions| questions]] can both be the subject of the epistemic stances of [[Epistemic Agent|epistemic agents]]. [[CITE_Barseghyan (2018)]][[CITE_Rawleigh (2018)]][[CITE_Patton (2019)]] [[Discipline| Disciplines]] like biology, physics, and astrology can also be the subject of such stances. For example, biology and physics are accepted by the scientific community of the modern world as disciplines, but astrology is rejected. In our definition, a discipline is said to be accepted by an epistemic agent if that agent accepts the [[Core Question| core questions]] specified in the discipline's [[Delineating Theory|delineating theory]], as well as the delineating theory itself.[[CITE_Patton and Al-Zayadi (2021)]] This definition takes discipline acceptance to be derivative of [[Theory Acceptance|theory acceptance]] and [[Question Acceptance|question acceptance]]. It requires, first, that an agent accepts the delineating theory that specifies that a particular set of core questions characterize a discipline. For example, the scientific community accepts that the question 'how do matter and energy behave?' is a core question of modern physics. The community also accepts the question itself. Therefore, they can be said to accept physics as a discipline. The scientific community of the modern world also accepts that the question 'how do the positions of celestial objects at the time of one's birth influence one's character?' is a core question of astrology. However, they do not accept the question itself, because they reject its supposition that such an influence exists. Thus, the scientific community rejects the discipline of astrology.  +
Nicholas Overgaard explains the topic  +
No [[Theory|theory]] acceptance may take place in a genuinely dogmatic [[Scientific Community|community]]. "Namely," Barseghyan notes when introducing '''the theory rejection theorem''' in [[Barseghyan (2015)]], "theory change is impossible in cases where a currently accepted theory is considered as revealing the final and absolute truth".[[CITE_Barseghyan (2015)|p. 165]]  +
Suppose a community has an accepted theory that asserts that it is the final and absolute truth. By the [[The Third Law (Barseghyan-2015)|Third Law]] we deduce the method: accept no new theories ever. By the [[The Second Law|Second Law]] we deduce that no new theory can ever be accepted by the employed method of the time. By the [[The First Law (Barseghyan-2015)|First Law]], we deduce that the accepted theory will remain the accepted theory forever.[[CITE_Barseghyan (2015)|p. 165-167]] [[File:Dogmatism-theorem.jpg|607px|center||]]  +
Barseghyan emphasizes that with the [[Dogmatism No Theory Change theorem]], "we can easily distinguish between genuinely dogmatic communities and communities which only ''appear'' dogmatic".[[CITE_Barseghyan (2015)|p. 166]] He presents the following example: <blockquote>It was once believed that the medieval scientific community with its Aristotelian mosaic was a dogmatic community, for it (allegedly) held on to its theories at all costs and disregarded all new theories. Yet, upon closer scrutiny it becomes obvious that the Aristotelian-medieval community was anything but dogmatic. Had the medieval community indeed taken a genuinely dogmatic stance, no scientific change would have been possible in their mosaic. But it is a historical fact that the Aristotelian-medieval mosaic was gradually changing especially in the 16th and 17th centuries; towards the end of the 17th century many of its key elements were replaced by new elements. Finally, by circa 1700 the Aristotelian-medieval system of theories was replaced with those of Descartes and Newton. This would have been impossible had the theories of the mosaic been actually taken as revealing the final truth. Thus, the Aristotelian-medieval community was not dogmatic. For some real examples of dogmatic communities think of those communities which, having started with some dogmas, fanatically held on to those dogmas and never considered their modification possible.[[CITE_Barseghyan (2015)|p. 166-7]]</blockquote>  +
A '''substantive method''' is one that presupposes at least one contingent proposition; one that depends on the state of something in the external world. According to our understanding of contingent propositions, all such propositions are '''fallible'''. As such, any substantive method will necessarily presuppose at least one contingent proposition, and is therefore fallible. Thus, by the '''synchronism of method rejection''' theorem, the rejection of a theory can result in the rejection of a method, rendering all substantive methods dynamic.  +
Here is the deduction as it appears in Barseghyan (2015):[[CITE_Barseghyan (2015)|p. 220]] <blockquote> According to the thesis of fallibilism, accepted in the contemporary epistemology, no contingent proposition (i.e. proposition with empirical content) can be demonstratively true. Therefore, since substantive methods are based on fallible contingent propositions, they cannot be immune to change. Imagine a typical mosaic with an accepted theory and a method that implements the constraints imposed by this theory. It is obvious that the method in question is necessarily substantive (by the definition of substantive method). Now, suppose that there appears a new theory that manages to satisfy the current requirements and, as a result, replaces the accepted theory in the mosaic. Naturally, this new theory imposes new abstract constraints (by the third law). It is conceivable that these new abstract constraints are incompatible with the requirements of the current method. In such an instance, the old method will be replaced by the new one (by the method rejection theorem). In short, a rejection of theories can trigger a rejection of the substantive method. This idea has been already implicit in the synchronism of method rejection theorem. Thus, there are no guarantees that an employed substantive method will necessarily remain employed ad infinitum. Consequently, any substantive method is necessarily ''dynamic''.</blockquote> [[File:Dynamic-substantive-methods.jpg|607px|center||]]  +
One example is the transition from the controlled trial method to the blind trial method, and then to the double-blind trial method. Blind trials were introduced as an implementation of the more abstract method that required accounting for the placebo effect on patients when testing drugs. Once the placebo effect became known, the method changed. Later, when it became known that the experimenter's bias could also affect patients during drug testing, the method changed once more, from blind to double-blind.  +
Another example is the transition from the Aristotelian-Medieval Method to the Hypothetico-Deductive Method. While in the former it was assumed that there was an essential difference between natural and artificial, and that therefore the results of experiments, being artificial, were not to be trusted when trying to grasp the essence of things, in both the Cartesian and Newtonian worldviews such a distinction was not assumed and therefore experiments could be as reliable as observations when trying to understand the world. Once the theories changed (from the natural/artificial distinction to no such distinction) the methods changed too (from no-experiments to the experimental method).  +
E
According to Oh, there is such a thing as element decay.[[CITE_Oh (2021)]]  +
Element decay is not a scientonomic phenomenon and, thus, is outside of the scope of scientonomy.  +
A method is said to be ''employed'' at time ''t'' if, at time ''t'', theories become accepted only when their acceptance is permitted by the method.[[CITE_Barseghyan (2015)|p. 53]] ''The second law'' of theory acceptance is a direct consequence of this definition of ''employed method''.  +
According to this definition of the term, ''employed method'' is nothing but the actual expectations of a certain community at a certain time. This is in tune with the actual scientonomic usage of the term. It is safe to say that this definition is tacitly used throughout Barseghyan's [[Barseghyan (2015)|''The Laws of Scientific Change'']]. For instance, when he says that the method of intuition schooled by experience was employed by the community of Aristotelian-Medieval natural philosophers, he actually means that this community expected new theories to be intuitively true.[[CITE_Barseghyan (2015)|pp. 143-145]][[CITE_Patton, Overgaard, and Barseghyan (2017)|p. 35]] When he says that the double-blind trial method is currently employed in drug testing, he means that "the community expects new drugs to be tested in double-blind trials".[[CITE_Patton, Overgaard, and Barseghyan (2017)|p. 35]][[CITE_Barseghyan (2015)|pp. 134-142]] Originally, this tacit definition of employed method has been repeatedly conflated with [[Employed Method (Barseghyan-2015)|the official definition of the term]] given on page 54 of ''The Laws of Scientific Change''.[[CITE_Barseghyan (2015)|pp. 54,144,145]] However, a community’s expectations were not mentioned in Barseghyan's [[Employed Method (Barseghyan-2015)|original definition]] of employed method. This new definition of ''employed method'' as "expectations of the community" was suggested to fix this conflation.  +
[[Joshua Allen]] makes a case for this broad definition of the term. According to Allen:[[CITE_Allen (2023)|pp. 75-76]] <blockquote> the broader the definition, the more likely it is to account for all epistemic practices conducted throughout history and across all geographies. Any narrowing of the definition risks excluding epistemically relevant practices that we may have simply failed to consider. The above broad framing, therefore, allows for the best chance at covering all actions that one would normally consider epistemic. It is agnostic towards the precise characteristics that may accompany an epistemic action, beyond what could reasonably be assumed to be the bare minimum, an epistemic agent taking an action that somehow involves an epistemic element. This definition has the additional benefit of aligning well with other sister categories in the scientonomic ontology. An epistemic stance, for instance, is understood in scientonomy to refer to the attitude of an epistemic agent towards an epistemic element. Having such similarly phrased definitions across basic notions in scientonomy brings a sense of symmetry to the ontology.</blockquote> Allen makes a case that if we were to go with a narrower definition, we would risk excluding such potentially relevant actions as publishing:[[CITE_Allen (2023)|p. 77]] <blockquote> The act of publishing a textbook does not seem directly to involve an intent to generate or assess epistemic elements. Similarly, while one would be hard-pressed not to view the spreading of knowledge as epistemically relevant, it could be difficult to confirm that an intent to generate or assess epistemic elements is involved. In both of these cases, it is not obvious how they could qualify as epistemic actions under the narrow definition, as they are not necessarily aiming to generate or assess epistemic elements. 
Yet, actions of publishing textbooks or spreading knowledge more generally could easily be epistemically relevant without being accompanied by such an intent, by way of their place within a broader tapestry of specific scientific practices.</blockquote>  
According to Allen, epistemic actions are a key part of everyday epistemic practice.  +
An ''epistemic agent'' acts in relation to [[Epistemic Element|epistemic elements]] such as theories, questions, and methods. The actions of an epistemic agent amount to taking [[Epistemic Stance|epistemic stances]] towards these elements, such as accepting or pursuing a theory, accepting a question, or employing a method. The stances of an epistemic agent must be ''intentional''. To be so, they must satisfy the following conditions: # the agent must have a semantic understanding of the propositions that constitute the epistemic element in question and of its available alternatives; and # the agent must be able to choose from among the available alternatives with reason, and for the purpose of acquiring knowledge.[[CITE_Patton (2019)]]  +
According to Barseghyan, epistemic agents are an essential part of the process of scientific change, as they take stances towards epistemic elements.  +
This definition attempts to capture what is arguably the key feature of epistemic communities - their collective intentionality to study/know the world. This feature, according to the definition, distinguishes epistemic communities from [[Non-Epistemic Community|non-epistemic communities]], such as political, economic, or familial communities. To use [[Nicholas Overgaard|Overgaard]]'s own example, "it is clear that an orchestra is a community: the various musicians can be said to have a collective intentionality to play a piece of music" and yet its collective intentionality is different from that of knowing the world.[[CITE_Overgaard (2017)|p. 59]]  +
According to [[Nicholas Overgaard|Overgaard]], communities that do not have a collective intentionality to know the world can still have sub-communities that do have such an intentionality. Overgaard illustrates this with the example of Google, a company that can be considered a [[Non-Epistemic Community|non-epistemic community]] as its collective intentionality is to make profit. Yet, as an innovative company, Google has many sub-communities which do have a collective intentionality to know the world, such as "a research and development team trying to better know Internet technologies, or a marketing team trying to better know how to reach consumers".[[CITE_Overgaard (2017)|p. 59]] By [[Epistemic Community (Overgaard-2017)|definition]], these sub-communities are [[Epistemic Community|epistemic]]. Thus, argues Overgaard, it is possible for an epistemic community to be the sub-community of a non-epistemic community.  +
The claim of the existence of epistemic communities can be traced back to Overgaard, who presented the distinction between epistemic and non-epistemic communities in his [[Overgaard (2017)|''A Taxonomy for the Social Agents of Scientific Change'']].[[CITE_Overgaard (2017)]]  +
According to Barseghyan, epistemic community is an epistemic agent, i.e. it is capable of taking [[Epistemic Stance|epistemic stances]] towards [[Epistemic Element|epistemic elements]].[[CITE_Barseghyan (2018)]]  +
The notion of epistemic agency implies that an agent takes epistemic stances ''intentionally''. That is: * the agent has a semantic understanding of the propositions that constitute the epistemic element in question, and of its alternatives, and * the agent is capable of choosing among them with reason, and with the goal of acquiring knowledge. Communities can meet these conditions. An [[Epistemic Community|''epistemic community'']], by definition, has a collective intentionality to know the world and can thus be said to pursue the goal of acquiring knowledge.[[CITE_Overgaard (2017)]] In order for a community to be a communal epistemic agent, it must be the case that its epistemic stances belong to the community as a whole, rather than simply to its constituent members. To understand how this can be, we must consider some general properties of systems with multiple interacting parts. Such systems, if their parts are appropriately organized in relation to one another, often exhibit ''emergent properties''.[[CITE_Bedau (2003)]][[CITE_Kim (1999)]][[CITE_O'Connor and Yu Wong (2015)]][[CITE_Wimsatt (2006)]][[CITE_Wimsatt (2007)|pp. 274-312]] William Wimsatt defined the emergent properties of a system as those that depend on the way its parts are organized.[[CITE_Wimsatt (2006)]][[CITE_Wimsatt (2007)|pp. 274-312]] ''Aggregate systems'' are those in which the parts do not bear an organized relationship to one another. The parts all play similar causal roles and can be interchanged or rearranged without consequence. The behaviour of the whole is just an additive, statistical consequence of that of its parts and no emergent properties are present. A jumbled pile of electronic parts is an example of an aggregate system. Its properties, like its mass and its volume, are just the sum of the masses and volumes of all its parts. A ''composed system'' possesses new emergent properties due to the way in which its parts are organized in relation to one another. 
A radio assembled by arranging electronic parts in the proper relation to one another is an example of a composed system. The ability to be a radio is an emergent property because none of the radio's parts exhibit it by itself. The parts are organized so that each one plays its own distinctive, specialized role in producing the emergent property. A number of authors have argued that epistemic communities are organized so as to give rise to emergent properties.[[CITE_List and Pettit (2006)]][[CITE_Palermos and Pritchard (2016)]][[CITE_Palermos (2016)]][[CITE_Theiner (2015)]][[CITE_Theiner, Allen, and Goldstone (2010)]][[CITE_Theiner and O'Connor (2010)]] Wimsatt's ideas have been specifically applied to epistemic communities by Theiner and O'Connor.[[CITE_Theiner and O'Connor (2010)]] An epistemic community is an organized system of individual epistemic agents, each of which makes its own distinctive contribution to the epistemic stances taken by the communal agent as a whole. These roles are determined by institutional and other forms of organization of the communal agent, and involve varied and complementary areas of specialized knowledge. Collective decision-making processes and institutional frameworks interact with and influence the views of individual community members. These allow a community to take epistemic stances towards epistemic elements that are distinct from those its individual members might take if left to their own devices. In an analysis of legal decision-making processes, Tollefsen [[CITE_Tollefsen (2004)]] has shown that there are a variety of circumstances under which a community's epistemic stances are not the simple aggregate of its individual members' stances. 
Longino [[CITE_Longino (1990)]][[CITE_Longino (2019)]][[CITE_Longino (1996)]] maintains that, when communities have normatively appropriate structures, critical interactions among community members holding different points of view mitigate the influence of individual subjective preferences and allow communities to achieve a level of objectivity in their taking of epistemic stances that is not generally possible for individual agents. Barseghyan [[CITE_Barseghyan (2015)|pp. 43-52]] has argued that the methods used by individual prominent scientists often, in fact, do not coincide with those of their community and that a community's acceptance of a theory is a function of the methods employed by that community rather than individual idiosyncrasies. Thus, it appears that most epistemic communities fit the requirements for communal epistemic agents.  
According to Barseghyan and Levesley, questions can have epistemic presuppositions.  +
Fraser and Sarwar argued that, as an epistemic stance, compatibility can be taken towards epistemic elements of all types.[[CITE_Fraser and Sarwar (2018)|p. 70]]  +
The only stance that an epistemic agent can take towards a method is [[Employed Method|''employment'']], i.e. a method is either employed or unemployed by an agent in theory evaluation.  +
In his [[Barseghyan (2018)|"Redrafting the Ontology of Scientific Change"]], Barseghyan argued that since [[Normative Theory|norms]] are a subtype of [[Theory|theory]], all the epistemic stances that can in principle be taken by an epistemic agent towards theories of all types can also be taken towards norms. In addition to these more universal stances, norms can also be [[Norm Employment|employed]], i.e. they have the capacity of constituting the actual expectations of the epistemic agent. This applies to norms of all types.[[CITE_Barseghyan (2018)]]  +
The stance of norm employment explained by Hakob Barseghyan  +
Rawleigh argued that, just like theories, [[Question|questions]] too can be [[Question Acceptance|accepted]] or unaccepted. A question can be accepted by an agent at one period and not accepted at another.  +
Consider, for instance, the question "what is the distance to the sphere of the stars?" which was once accepted as a legitimate topic of inquiry, but is no longer accepted. Similarly, the question "what is the mechanism of evolution of species?" is accepted nowadays, but wasn't accepted in the 17th century. Thus, we can say that question acceptance is the stance that epistemic agents take towards questions.  +
It is argued by Sarwar and Fraser that, in addition to the already accepted epistemic stances, the stance of ''scientificity'' can be taken towards theories.[[CITE_Sarwar and Fraser (2018)]]  +
According to Barseghyan, acceptance as an epistemic stance can be taken towards theories.[[CITE_Barseghyan (2015)|pp. 30-32]]  +
According to Barseghyan, the epistemic stance of pursuit can be taken towards theories, i.e. an epistemic agent can find a theory pursuitworthy.[[CITE_Barseghyan (2015)|pp. 30-40]]  +
According to Barseghyan, the epistemic stance of use can be taken towards theories, i.e. an epistemic agent can find a theory useful.[[CITE_Barseghyan (2015)|pp. 30-40]]  +
A physical object or system is an epistemic tool for an [[Epistemic Agent|epistemic agent]] ''iff'' there is a procedure by which the tool can provide an acceptable source of knowledge for answering some [[Question|question]] under the employed [[Method|method]] of that agent. Examples of epistemic tools include rulers, thermometers, the Large Hadron Collider, the Hubble Space Telescope, a written text, a computer, a blackboard and chalk, a crystal ball, etc.  +
There are several different senses in which one might take the concept of scientific error. One is the absolute sense. From our modern perspective, we might judge the geocentric [[Aristotle| Aristotelean-Ptolemaic cosmology's]] claim that the earth is stationary at the center of the universe as an error [[CITE_Allchin (2001)]]. The sense of error we are interested in here is not this absolute sense of error as judged from a future perspective. Instead, our definition takes the perspective of the historical [[Epistemic Agent| agent]] and the [[Mechanism of Method Employment | method employed]] by the agent at that time. Our definition is grounded in [[Mechanism of Theory Acceptance| the law of theory acceptance]]. When a [[Theory| theory]] is erroneously accepted, the [[Nature of Appraisal| assessing]] agent believes it has satisfied the requirements of their employed method when, in fact, it has not. Error may be due to an honest mistake by the epistemic agent that created the theory, or to scientific misconduct, i.e. actions that the theory-creating agent knows to violate the epistemic and moral [[Normative Theory | norms]] of scientific inquiry accepted at the time.  +
The analysis of several instances of scientific error by [[Sarah Machado-Marques|Machado-Marques]] and [[Paul Patton|Patton]] suggests that the handling of these instances by scientists is in accord with the theory rejection theorem. Handling of error involves, according to this view, not only a rejection of some of the propositions that are considered to be accepted erroneously but also an acceptance of some new propositions. In some cases, an erroneously accepted ''first-order'' proposition is replaced by another ''first-order'' proposition incompatible with it. In other cases, an erroneously accepted ''first-order'' proposition is replaced by a ''second-order'' proposition stating the lack of sufficient reason for accepting the first-order proposition. According to this view, the handling of erroneously accepted theories involves their replacement with other theories; the handling of scientific error is therefore in full accord with the theory rejection theorem.  +
Nicholas Overgaard explains the topic  +
This category encompasses that knowledge which hasn't been openly formulated by the agent but can, in principle, be openly formulated. As such, the category is agent-relative. The definition was first suggested by [[Hakob Barseghyan]] and [[Maxim Mirkin]] in their ''[[Barseghyan and Mirkin (2019)|The Role of Technological Knowledge in Scientific Change]]''[[CITE_Barseghyan and Mirkin (2019)]] and was restated by Mirkin in his ''[[Mirkin (2018)|The Status of Technological Knowledge in the Scientific Mosaic]]''.  +
According to this definition, knowledge is said to be ''explicit'' if it has been openly formulated by the agent in question. As such the notion of ''explicit'' is agent-relative. The definition was first suggested by [[Hakob Barseghyan]] and [[Maxim Mirkin]] in their ''[[Barseghyan and Mirkin (2019)|The Role of Technological Knowledge in Scientific Change]]''[[CITE_Barseghyan and Mirkin (2019)]] and was restated by Mirkin in his ''[[Mirkin (2018)|The Status of Technological Knowledge in the Scientific Mosaic]]''.  +
F
Scientonomy Workshop, February 25, 2023  +
G
Allen makes a case that while many types of epistemic actions are local, i.e. available to only ''some'' agents at ''some'' periods, there are also global epistemic actions. According to Allen, "taking a stance of acceptance (i.e., accepting) seems to be a global action, as without this epistemic action no process of scientific change seems possible". [[CITE_Allen (2023)|p. 79]]  +
Allen makes a case that there is such a thing as a global epistemic action (e.g. ''accepting a theory'').[[CITE_Allen (2023)|p. 79]]  +
Allen makes a case that epistemic actions can be global or local.[[CITE_Allen (2023)]]  +
In the scientonomic workflow, the goal of peer review is to assess a paper for the pursuitworthiness of the modifications it suggests. Thus, peer reviewers should not evaluate submissions for acceptability, but only for pursuitworthiness.  +
In [[Nicholas Overgaard|Overgaard]]'s taxonomy, the term ''group'' refers to the most basic societal entity - a set of two or more people. As such, it is meant to play the role of the most abstract class which has two sub-classes - [[Community (Overgaard-2017)|community]] and [[Accidental Group (Overgaard-2017)|accidental group]].[[CITE_Overgaard (2017)]]  +
H
The editors should be granted official ''housekeeping rights'' to create and modify the necessary pages of the encyclopedia to handle ripple effects. Specific handling of ripple effects should depend on whether the additional change is ''implied'' by the modification or whether it is conceivable to accept the modification without accepting the additional change. There are two possible scenarios. # The additional change ''does not alter'' the accepted body of scientonomic knowledge, but merely explicates what is implicitly accepted by the community. In such cases, the editors should create and/or modify the necessary pages of the encyclopedia to handle the ripple effect. # The additional change ''alters'' the accepted body of scientonomic knowledge (i.e. it is possible to accept the original modification without accepting the additional change). In such cases, the editors should introduce these additional changes in a regular fashion by registering them as new suggested modifications for the community to comment on. As put by Shaw and Barseghyan: <blockquote>The key rule of thumb here is this: is it conceivable to accept the modification without accepting the ripple effect change in question? If so, then this new ripple effect change should be registered as a new suggested modification and discussed. If not, then no new suggested modification is necessary; instead, the editors should modify the encyclopedia to document the ripple effect.[[CITE_Shaw and Barseghyan (2019)|p. 10]]</blockquote>  +
Hierarchical authority delegation is a sub-type of [[Multiple Authority Delegation (Loiselle-2017)|multiple authority delegation]]. It describes a situation in which a community delegates authority over some [[Question|question]] to multiple communities, but at different degrees of authority. Consider a case of multiple authority delegation in which either expert A OR expert B might be consulted. If the word of expert A is always accepted over the word of expert B, we have a case of hierarchical authority delegation. Here is an example from the art world. The Modigliani ''catalogue raisonné'' by Ambrogio Ceroni is widely regarded by the art market as being the most reliable source when it comes to matters of Modigliani attribution. That being said, it is also widely accepted that the catalogue is incomplete. In 1997, Modigliani scholar Marc Restellini began creating a new catalogue raisonné for the artist. Between 1997 and 2015 (when Restellini's project was abandoned), the art market held a relationship of hierarchical authority delegation with Ceroni and Restellini. If a painting was listed in the Ceroni catalogue, it was considered authentic, regardless of Restellini's opinion. If it was not in the Ceroni catalogue but ''was'' considered authentic by Restellini, then it was accepted as such by the art market. The fact that both Ceroni and Restellini were valued as independent authorities makes this an instance of multiple authority delegation; the fact that Ceroni's word was valued over Restellini's makes it a case of hierarchical authority delegation.  +
The definition tweaks the [[Hierarchical Authority Delegation (Loiselle-2017)|original definition]] of the term by [[Mirka Loiselle|Loiselle]] to ensure that the relationship of hierarchical authority delegation can obtain between [[Epistemic Agent|epistemic agents]] of all types. It also substitutes [[Question|''question'']] for ''topic'', as the former is the proper scientonomic term that should be used.  +
To reconstruct the state of a ''mosaic'' at time ''t'', it is necessary to understand which theories were accepted and which methods were employed at that time.[[CITE_Barseghyan (2015)|p. 12]] This task is the concern of historical (empirical) questions. The '''theory of scientific change''', although closely linked, is concerned with theoretical questions. ''History of Scientific Change'' is one of the key concepts in current scientonomy. Thus, its proper definition is of great importance for formulating a descriptive theory of ''scientific change''.  +
I
Implication aims to capture the idea of one theory "following" from another. Although this idea is usually associated with that of deduction, sometimes deduction is too strict. When a theory constitutes evidence for another theory, that theory may imply the other depending on the strength of the evidence. There is no cutoff point, shared by all agents, for how strong the evidence needs to be. Instead, each agent has some ''rules of implication'' which determine when a theory ''implies'' another. These may be rules of logical entailment, of Bayesian confirmation theory, or of a detective's instinct. Having a notion of implication greatly clarifies an epistemic agent's theoretical thinking. Each agent may have their own rules of implication that differ from the modern concept of logical entailment or of deduction. Furthermore, implication clarifies what "deducible" means in [[The Third Law]]: a theory is deducible from another set of theories if it is implied by that set of theories.  +
Implicit is the opposite of ''explicit'' and, thus, it doesn't require more than a very minimalist definition. This definition creates a strong link between the two concepts and won't require any changes in the definition of ''implicit'' when the respective definition of ''explicit'' happens to change.  +
One putative method of learning the [[Employed Method|''employed method'']] of the time is by studying texts concerning scientific [[Methodology|''methodology'']] to learn what method was prescribed by the [[Scientific Community|community]] or advocated by ''great scientists''. However, such indicators can yield incorrect results. During the second half of the eighteenth century and the first half of the nineteenth century, the scientific community explicitly advocated the ''empiricist-inductivist'' methodology championed by [[Isaac Newton]]. This methodology held that new theories should be deduced from phenomena, and that unobservable entities should not be posited. However, the historical record actually shows that several theories positing unobservable entities did, in fact, become accepted during this period. These include Benjamin Franklin's theory of electricity, which posited an unobservable ''electric fluid'', the ''phlogiston'' theory of combustion, and the theory that light is a waveform in a ''luminiferous ether''. Thus the ''accepted methodology'' [[Scope of Scientonomy - Explicit and Implicit|does not necessarily indicate]] the ''employed method'' of the time. [[CITE_Barseghyan (2015)|pp. 53-54]] More promising indicators of method employment are ''indirect'', via inference from historical facts about what theories are accepted, the process of appraisal, and the prior state of the mosaic. For example, one might note what sort of theories become accepted during a particular time period by some community and try to determine why. If theories become accepted after some novel prediction they make has been confirmed, then the employed method of the time was most likely ''hypothetico-deductive''. On the other hand, if theories do not require confirmed novel predictions to become accepted, then some other method might be the one employed. The most suitable indirect indicators of method employment will vary from case to case with context and culture.  
''Indicators of theory acceptance'' are historical facts that provide evidence indicating that a scientific [[Theory|theory]] was accepted by some [[Scientific Community|community]] at a particular time. The opinions of [[Individual Level|individual scientists]] are not clear indicators of the acceptance of a theory by a community. Possible indicators are sources that typically indicate the opinion of an entire scientific community, such as encyclopedias, textbooks, university curricula, and the minutes of association meetings. [[CITE_Barseghyan (2015)| pp. 113-117]] Beginning in the eighteenth century, ''encyclopedias'' were a collective undertaking and thus typically good indicators of what was accepted at the time of their publication. However, until recently they were only published sporadically, and so generally can't provide a thorough description of successive states of the mosaic. Before the eighteenth century, encyclopedias were written by either a single author or an isolated small group. In such cases they may contain theories championed by the author but not necessarily accepted by the community. ''Textbooks'' are typically written with the objective of presenting the current state of knowledge in their field and are thus often a good gauge of accepted thinking. But because they are often written by a single author or a small number of authors, they should be treated with caution. ''University curricula'' similarly typically have the goal of exposing students to an accepted body of knowledge in a field. However, theories that are not considered the best available theory are sometimes nonetheless taught. Classical physics is taught to modern physics students not because it is deemed the best available description of its subject matter but because it is useful for many practical applications and is simpler and easier to understand than the more advanced treatments using quantum physics or general relativity theory. 
Items can also sometimes be included in a curriculum out of historical interest rather than current value. ''Minutes of association meetings'' can also sometimes be indicative of the stance of a community towards a particular theory. However, minutes can often provide only a fragmentary indication of what was accepted and what was not. No indicators of theory acceptance are universal or conclusive. Indicators are ''contextual'' to their time and cultural circumstances.  
According to Patton, there is such a thing as an individual epistemic agent, capable of taking [[Epistemic Stance|epistemic stances]] towards [[Epistemic Element|epistemic elements]].[[CITE_Patton (2019)]]  +
According to Patton, individuals are "capable of taking epistemic stances towards epistemic elements, with reason, based on a semantic understanding of the elements and their available alternatives, and with the goal of producing knowledge".[[CITE_Patton (2019)|p. 82]]  +
The notion of epistemic agency implies that an agent takes epistemic stances ''intentionally''. That is: * the agent has a semantic understanding of the propositions that constitute the epistemic element in question, and of its alternatives, and * the agent is capable of choosing among them with reason, and with the goal of acquiring knowledge. It is clear that a typical individual human being can satisfy these requirements. The main exceptions are prelinguistic infants, or people with certain neurological conditions that render them incapable of understanding propositions. Besides these absolute constraints, the applicability of the definition may also vary as a matter of degree, since individuals may differ one from another in the depth of their semantic understanding of the epistemic element in question and other contextually relevant epistemic elements. Such differences might be produced, for example, by scientific or professional training. An individual's merits as an epistemic agent will be assessed by others based on whether their claims can satisfy the requirements of the [[Method|method]] employed by those others. The issues raised by norms of epistemic merit are best understood in terms of the concept of [[Authority Delegation|authority delegation]].  +
By the ''individual level'' Barseghyan means an "individual scientist who has her own set of ideas and beliefs about the world, and employs certain methods which might be different than the accepted methods of the time".[[CITE_Barseghyan (2015)|p. 43]]  +
The category is agent-relative and encompasses that knowledge which cannot - even in principle - be explicated. The definition was first suggested by [[Hakob Barseghyan]] and [[Maxim Mirkin]] in their ''[[Barseghyan and Mirkin (2019)|The Role of Technological Knowledge in Scientific Change]]''[[CITE_Barseghyan and Mirkin (2019)]] and was restated by Mirkin in his ''[[Mirkin (2018)|The Status of Technological Knowledge in the Scientific Mosaic]]''.  +