Severe Local Storms With a Slight Chance of Error: Some Remarks from Philosophy and Econometrics
Deborah G. Mayo, Department of Philosophy
Aris Spanos, Department of Economics
"Meteorology Meets Decision Science: Risk, Forecast and Decision," Wednesday 6th June - Friday 8th June 2007

1

Probability often enters in characterizing inductive inference. A long-standing problem debated by statisticians and philosophers is: What is the role of frequentist probability in qualifying the warrant for particular inferences? What is the relevance of long-run relative frequencies in the "single case"? Examples from meteorology, interestingly enough, are frequently appealed to both in articulating this problem and in suggesting how to solve it! A recent illustration: "…the apparently reasonable question 'what is the probability that the earth's global warming will cause an increase in severe storms?' is strictly meaningless from the frequentist point of view, since there is only one earth and repeated experiments where multiple earths are inspected cannot be performed" [although one could talk about earth-like planets]. C. S. Peirce: "universes are not as plenty as blackberries…"

2

There are leading frequentist statisticians who think the answer to the problem may be found by considering primitive (or ordinary) intuitions regarding claims about weather, rain, storms, etc. It is thought these may serve as analogies for the less clear problem of applying probability to statistical inferences; never mind that the intuitions to which they appeal seem to be in opposition to each other! This will offer a springboard for raising key issues and sketching my own answer to the question. The task is not to cash out what "probability" means when applied to single cases in ordinary language, but rather to ask how formal probabilistic methods, based on frequentist notions, may or should apply to characterizing specific claims about hypotheses regarded as true or false.

3

Statistical inference tools: use data x to ask about aspects of the data-generating source as modeled by a statistical distribution. Statistical hypotheses: generally put in terms of parameters governing the statistical distribution.
Example. Let the sample be X = (X1, …, Xn), each Xi Normal, N(μ, σ²), IID, with mean μ and (for simplicity) known σ. A question of interest might be put in terms of H:
H: μ < μ* (e.g., is the mean temperature less than some value?)
This may arise in testing hypotheses about μ or in estimating μ.
• Our focus is on cases where such hypotheses are regarded as true or false (not those special cases where their truth can be seen as the outcome of a random process).
• Probability applies to events described in terms of random variables, e.g., {X̄ > 0}, where X̄ is the sample mean.
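To make the setup concrete, here is a minimal Python sketch (not part of the original slides) showing how probability attaches to an event such as {X̄ > 0} via the sampling distribution of the sample mean; the values of μ, σ, and n are illustrative assumptions.

```python
# Sketch: probability of the event {X-bar > 0} under the Normal IID model.
# The values of mu, sigma, and n are assumptions chosen for illustration.
import numpy as np
from scipy import stats

mu, sigma, n = 0.2, 1.0, 25
sigma_xbar = sigma / np.sqrt(n)          # standard deviation of the sample mean

# X-bar ~ N(mu, sigma^2/n), so P(X-bar > 0; mu) comes from its sampling distribution
p_event = stats.norm.sf(0, loc=mu, scale=sigma_xbar)
print(f"P(X-bar > 0; mu={mu}) = {p_event:.3f}")
```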

4

Frequentist Error Statistics
Probability enters to characterize statistical inference rules: e.g., ensuring there's only a slight chance of error. For example, less than 5% of the time:
• a hypothesis is erroneously rejected,
• or a confidence interval fails to cover the true value of μ (whatever it is).
These are the procedure's error probabilities.
Our question really is:
• How do frequentist error probabilities of procedures apply to particular statistical inferences?
• How do they qualify or justify a particular output?
Before considering my own take, I want to consider what has been said by some better-known frequentists…

5

Inductive Behavior Philosophy
J. Neyman: Tests as rules of behavior: "To decide whether a hypothesis, H, of a given type be rejected or not, calculate a specified character, d(x0), of the observed facts [the test statistic]; if d(x) > d(x0) reject H; if d(x) ≤ d(x0) Accept H" (Neyman and Pearson, 1933, p. 142).
'Accept/Reject' are identified with deciding to take specific actions, e.g., publishing a result, announcing a new effect.
In our example, d(x) = (X̄ − μ)/σx, where σx = σ/√n.
Reject H at significance level α iff d(x0) > zα.
"it may often be proved that if we behave according to such a rule ... we shall reject H when it is true not more, say, than once in a hundred times, and in addition we may have evidence that we shall reject H sufficiently often when it is false."
Neyman: the goal of tests is not to adjust our beliefs but to "adjust our behavior" to limited amounts of data so that we ensure only a slight chance of error in the long run.
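The behavioristic reading lends itself to a short simulation (a sketch with assumed values, not from the slides): behave according to the rule "reject H whenever d(x0) > zα" over many repetitions and check that, when H is true, the rule errs about α of the time.

```python
# Sketch: the Neyman-Pearson test as a rule of behavior, and its long-run error rate.
# All numerical values are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu0, sigma, n, alpha = 0.0, 1.0, 25, 0.05
z_alpha = stats.norm.ppf(1 - alpha)            # critical value
sigma_xbar = sigma / np.sqrt(n)

def reject(x):
    """Rule of behavior: reject H iff d(x0) = (xbar - mu0)/sigma_xbar > z_alpha."""
    return (x.mean() - mu0) / sigma_xbar > z_alpha

reps = 50_000
errors = sum(reject(rng.normal(mu0, sigma, n)) for _ in range(reps))   # H true throughout
print(f"erroneous rejections of a true H: {errors / reps:.3f}  (about alpha = {alpha})")
```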

6

Neyman draws on an analogy with the behavior of primitive humans: Early in human history it was established that rain or severe snow storms follow the appearance of heavy clouds. This is one of many permanencies noted. Although this permanency is not absolute (just as most others are not), human beings and also some animals tend to take cover whenever dark clouds appear in the sky.

7

Dark clouds → (behave as if) severe storms → take cover.
The "permanency" is the relative frequency with which a particular result occurs in repeated trials in which the outcome of any single trial is unpredictable. This permanency or property is measurable and unchangeable to the same extent as the die's dimensions and weight. Without such rules for good habits, primitive humans would not have survived…
Later on there was the detection of the regularity of relative frequencies associated with games of chance: the relative frequency of a die falling with 6 dots. Once these regularities were noticed, the corresponding abstract concepts were easy to create. The hypothesized long-run relative frequency received the label of probability and found useful applications in forming our inductive behavior. (Neyman, 1950, p. 2)
Probability grows out of a need for a formal model of primitive inductive behavior: controlling the rate of erroneous actions.

8

Rule of Inductive Behavior: Let E1, E2, …, Em, … be all possible different outcomes of an experiment or of observations relating to some phenomena. Let a1, a2, …, am, … be all the different actions contemplated in connection with these phenomena. If a rule R unambiguously prescribes the selection of an action for each possible outcome Ei, then it is a rule of inductive behavior. (Neyman 1950, p. 10)
Statistical tests are special cases, where the outcomes occur with some probability; in tests the actions are usually limited to two. The justification is rooted in the primitive tendency to adopt good habits for survival. If H0 is true (severe storms), it is better to take action a1: take cover, whereas if H1 is true (no storms), it is preferable not to take cover. The test's low error probabilities ensure we will not erroneously act too often on average. Likewise, the justification for error statistical procedures in any particular case would be merely that it is a good habit to have in the long run.
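The definition amounts to an unambiguous map from outcomes to actions; a toy sketch (hypothetical outcomes, actions, and cutoff, not from the slides) of that structure, with the statistical test as the two-action special case:

```python
# Sketch: a rule of inductive behavior is an unambiguous outcome-to-action map.
# Outcomes, actions, and the cutoff below are hypothetical illustrations.
def rule_R(outcome: str) -> str:
    actions = {"dark clouds": "a1: take cover", "clear sky": "a2: carry on"}
    return actions[outcome]

def test_rule(d_x0: float, z_alpha: float = 1.645) -> str:
    """Two-action special case: the map is induced by a cutoff on the test statistic."""
    return "reject (act as if H0 false)" if d_x0 > z_alpha else "accept (act as if H0 true)"

print(rule_R("dark clouds"), "|", test_rule(2.3))
```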

9

Behavioristic Justification for Confidence Intervals
Still dealing with our Normal distribution example: a 95% confidence interval estimation rule outputs inferences of the form
(X̄ − zα σx ≤ μ < X̄ + zα σx),
where σx = σ/√n, X̄ − zα σx is the generic lower CI limit, and X̄ + zα σx the generic upper CI limit.
P(X̄ − zα σx ≤ μ < X̄ + zα σx; μ) = 1 − 2α
For α = .025 we have zα ≈ 2, so our familiar 95% confidence interval rule is:
Rule R: Observe x̄0 and output: (x̄0 − 2σx ≤ μ < x̄0 + 2σx).
In the long run, 95% of such intervals will cover the true μ, whatever it is. A good rule of behavior. Act as if the true value is in the interval.
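A short simulation (assumed illustrative values, not from the slides) exhibits the behavioristic claim: applying Rule R repeatedly, about 95% of the resulting intervals cover the true μ, whatever it is.

```python
# Sketch: long-run coverage of the 95% confidence interval rule.
# mu_true, sigma, and n are assumptions for illustration; mu_true is unknown to the rule.
import numpy as np

rng = np.random.default_rng(1)
mu_true, sigma, n = 3.7, 2.0, 25
sigma_xbar = sigma / np.sqrt(n)

reps, covered = 50_000, 0
for _ in range(reps):
    xbar0 = rng.normal(mu_true, sigma, n).mean()
    covered += (xbar0 - 2 * sigma_xbar <= mu_true < xbar0 + 2 * sigma_xbar)

print(f"proportion of intervals covering mu: {covered / reps:.3f}  (about .95)")
```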

10

Other frequentists have rejected the behavioristic construal as the most satisfactory way to attach probability to a particular conclusion. Their solution to the problem of the ‘single case’ considers ordinary examples about the probability of rain today. Since this will be an argument by analogy, it will concern a different context where probability may arise, one that is thought to help with the difficult one in inference. D. R. Cox (2006) elucidating Fisherian fiducial probability: Consider daily rainfall measured at some defined point in Gothenburg in April. Let W be the event that on a given day the rainfall exceeds 5mm. Ignore climate change, etc. and suppose we have a large amount of historical data recording the occurrence and non-occurrence of W. …ignore possible dependence between nearby days. Then proportions of occurrence of W when we aggregate will tend to stabilize and we idealize this to a limiting value πW, the probability of a wet day. This is frequency-based, a physical measure of the weather-system. We are interested in this one special day, not a sequence of predictions. ..and we stick to a frequentist approach. …. Consider: will it be wet in Gothenburg tomorrow?
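Cox's idealization, that aggregated proportions of wet days stabilize to a limiting value πW, can be pictured with simulated Bernoulli "days"; the πW and record length below are assumptions standing in for the historical Gothenburg data.

```python
# Sketch: running proportion of "wet" days stabilizing toward pi_W.
# pi_W and the number of days are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(2)
pi_W = 0.3                                    # hypothetical limiting relative frequency
wet = rng.random(200_000) < pi_W              # True = rainfall exceeds 5mm that day
running_prop = np.cumsum(wet) / np.arange(1, wet.size + 1)

for k in (10, 100, 1_000, 10_000, 200_000):
    print(f"after {k:>6} days: proportion wet = {running_prop[k - 1]:.3f}")
```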

11

(1) Neyman: probability is inapplicable (until tomorrow midnight, when it is either zero or one). We may follow a rule of inductive behavior, say "It will rain tomorrow." If we follow such a rule repeatedly we will be right a proportion πW of the time.
(2) The probability of the event W is πW, just as the probability of the next coin toss landing heads is .5 so long as the assumptions of a fair coin toss are met (the next toss is an IID sample, tomorrow is a typical realization of the process, it is not a member of a relevant subset, etc.).
We are to take this lesson and apply it to the case of frequentist inference, such as a one-sided upper confidence interval:
(μ < X̄ + zα σx), i.e., μ < CIU

As a single statement it "has the evidential force of a statement of a unique event within a probability system" (Cox, 67). But the rules for manipulating probabilities do not apply… (Cox, 67)

12

The statements assigned probability in this way fail to obey the probability calculus: Fisher's Fiducial Fallacy.
As before, X = (X1, …, Xn), with Xi Normal IID, N(μ, σ²), with known σ². An upper (97.5%) confidence interval estimation rule outputs inferences of the form
(μ < X̄ + 2σx), where σx = σ/√n.
From the sampling distribution of the statistic X̄, the sample mean would differ from the true mean, whatever it is, (in the negative direction) by more than 2 standard deviations with probability only .025. This gives the first premise of the following argument:
P(μ < X̄ + 2σx; μ) = .975.
Observe mean x̄0.
Therefore, P(μ < x̄0 + 2σx; μ) = .975.
So it seems we should be able to substitute the observed mean for the random variable and still have the claim hold (Fisher 1956): the % of cases in which the inequality fails (i.e., μ ≥ x̄0 + 2σx) is .025; thus the probability it fails in this case is .025?
Fallacy of Probabilistic Instantiation
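The fallacy can be exhibited numerically (a sketch with assumed values, not from the slides): the .975 describes how often the rule's event occurs across repetitions, whereas the instantiated statement, with x̄0 fixed, is simply true or false depending on the unknown μ.

```python
# Sketch: .975 is a property of the rule, not of the instantiated statement.
# Numerical values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
sigma, n = 1.0, 25
sigma_xbar = sigma / np.sqrt(n)

# (a) As a rule: the event {mu < Xbar + 2*sigma_xbar} occurs about 97.5% of the time.
mu_true = 5.0
xbars = rng.normal(mu_true, sigma_xbar, 100_000)            # sampling distribution of Xbar
print("rule:", np.mean(mu_true < xbars + 2 * sigma_xbar))   # ~ 0.975

# (b) Instantiated: fix one observed mean; the claim is just true or false,
#     depending on the unknown mu. No .975 attaches to it.
xbar0 = 4.9
for mu in (4.8, 5.0, 5.2, 5.4):
    print(f"mu = {mu}: 'mu < xbar0 + 2*sigma_xbar' is {mu < xbar0 + 2 * sigma_xbar}")
```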

13

"Fiducial Probability Distribution" Although this may make μ look like a random variable in fact it not; these claims do not hold once X 0 is plugged in for X . Pμ (μ <

X

+ 0σx) = .5

Pμ (μ <

X

+ .5σx) = .7

Pμ (μ <

X

+ 1σx) = .84

Pμ (μ <

X

+ 1.5σx) = .93

Pμ (μ <

X

+ 1.98σx) = .975
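These values are just the standard Normal CDF Φ(k) at the listed multipliers k; a quick check (illustrative, not from the slides) reproduces the rounded figures above.

```python
# Sketch: the listed "fiducial" values are Phi(k) for k = 0, .5, 1, 1.5, 1.98.
from scipy.stats import norm

for k in (0, 0.5, 1, 1.5, 1.98):
    print(f"P(mu < xbar + {k}*sigma_xbar) = {norm.cdf(k):.3f}")
```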

So "the rules for manipulating probabilities in general do not apply," but "as a single statement a (1 − α) upper limit has the evidential force of a unique event within a probability system" (Cox). The common assumption is that a degree of probability is the way to give a degree of evidential force or warrant. I would wish to distinguish: degree of uncertainty (in the occurrence of an event) vs. degree of evidential force. Would we say that John's buying a ticket in a million-ticket lottery is strong evidence he will not win the lottery? But this just follows deductively if the model holds, and doesn't seem to capture the idea of strong evidence about inferences in science.

14

Probabilities vs Likelihoods
It might be suggested that what is intended is that the likelihoods of the different μ values differ: x̄0 is fixed, μ varies.
Lik(μ; x̄0) = P(x̄0; μ)
Values of μ near x̄0 are more likely given the data, but likelihoods do not obey the probability calculus (e.g., they don't sum to 1). Moreover, likelihood will not give a degree of evidential warrant: if H entails x̄0, the likelihood of H given x̄0 is maximal.
H entails x̄0.
x̄0 is observed.
Therefore, H.
This argument is invalid; there are other hypotheses that could also entail or fit the data x̄0. Before we take x̄0 as warranting H, we want to ensure that the fit we observed is something very improbable or difficult to achieve were H false.
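A sketch (illustrative numbers, not from the slides) of the likelihoods of different μ values given a fixed x̄0: the likelihood peaks at μ = x̄0, but a maximal fit by itself says nothing about how hard that fit would have been to achieve were the hypothesis false.

```python
# Sketch: likelihood of mu given one fixed observed mean xbar0 (values assumed).
from scipy.stats import norm

sigma, n = 1.0, 25
sigma_xbar = sigma / (n ** 0.5)
xbar0 = 2.0                                   # fixed observed mean (hypothetical)

def lik(mu):
    """Lik(mu; xbar0) = density of xbar0 under N(mu, sigma^2/n); xbar0 fixed, mu varies."""
    return norm.pdf(xbar0, loc=mu, scale=sigma_xbar)

for mu in (1.6, 1.8, 2.0, 2.2, 2.4):
    print(f"Lik(mu={mu}; xbar0={xbar0}) = {lik(mu):.3f}")
# mu = xbar0 maximizes the likelihood, yet that alone is not severe evidence for it.
```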

15

While Fisher’s Fiducial instantiation is generally considered, even by his ardent admirers, to be a perplexing lapse,…many have been unable to resist trying to find a key to the puzzle of how under certain conditions, Fisher could be right… Never mind that Fisher himself emphatically denied that probability was the only way to capture uncertainty. (If you know about the history of statistical foundations, you’ll know the philosophical debates were entangled with deep personal animosities, in-fighting, and a host of psychological & political issues… For example, some think Fisher stubbornly insisted on his fiducial probabilities out of refusal to concede to Neyman who showed inconsistencies…)

Before turning to the current-day gambit to make the Fisherian omelet by introducing Bayesian eggs, I consider a different interpretation of these same probabilistic facts, rejecting the behavioristic, fiducial (and later Bayesian) approaches. It is unclear whether Fisher or Cox would concur…

16

An informal example: Isaac's high temperature.
In testing whether infant Isaac's temperature (which had been an alarming 104 degrees) has increased or has cooled down so that it's not far above normal (e.g., 99 degrees), one checks his temperature using a series of well-calibrated thermometers on which the high temperature had initially registered: some old style, some digital, etc. If none register more than 1 degree above normal, even though they all pointed to 104 a few hours ago, then this would be a good indication that:
H: Isaac's temperature is now no more than 1 degree over normal.
A couple have bells that go off when it's 101 or more, and this time none went off, so the new readings are even better evidence his temperature < 101.
H, we would say, has passed a severe test: were Isaac's temperature beyond 99, at least one of these thermometers would almost certainly have detected this.
What justifies this? Would we say, I would rarely be wrong in following this rule? Is my justification that this is a prudent rule to be in the habit of acting upon? Seems silly. Rather, knowledge of the temperature-detection abilities of the thermometers characterizes what they are capable of doing each and every time: any baby that passes through such a test has a temperature no more than 1 degree above normal.

17

Note too: I don't want to know how probable it is that his temperature is down; I don't care to know if, say, 95% of babies would have close to normal temperatures. (Aside: with a low prior probability that a baby's temperature goes down in only 2 hours, the posterior for the claim that Isaac's has gone down is small, despite the evidence.) Given the evidence, I want to know approximately what Isaac's temperature is, and in particular whether it's come down close to normal. This use of error probabilities to characterize the properties of procedures delivers just that.
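The "would almost certainly have detected" claim admits a rough calculation; the measurement-error spread and number of thermometers below are made-up assumptions for illustration.

```python
# Sketch: probability that every thermometer reads at or below normal + 1 degree,
# for various true temperatures. Error SD and thermometer count are assumed.
from scipy.stats import norm

normal_temp = 98.6
threshold = normal_temp + 1.0        # none of the readings exceeded this
read_sd = 0.3                        # assumed per-thermometer measurement error SD
k = 5                                # assumed number of independent thermometers

def p_all_read_low(true_temp):
    """P(all k independent readings <= threshold | true_temp)."""
    return norm.cdf(threshold, loc=true_temp, scale=read_sd) ** k

for t in (99.6, 100.0, 101.0, 104.0):
    print(f"true temp {t}: P(no thermometer flags it) = {p_all_read_low(t):.2e}")
```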

18

Consider again the inference to a particular CIU based on x0:
H(x0): μ < x̄0 + 2σx
(Perhaps μ is the mean temperature in a given ocean.) Suppose in fact that this inference is false, and the true mean exceeds CIU, that is, μ* = CIU + k. Then it is very probable we would have observed a larger sample mean:
P(X̄ > x̄0; H(x0) is false) > .975, i.e., P(X̄ > x̄0; μ*) > .975.
Therefore, x̄0 is good evidence that H(x0) is true, i.e., μ < x̄0 + 2σx.
(1) Data x̄0 accords with H(x0) (all the μ < CIU).
(2) Were H(x0) false, then with high probability a more discordant result would have occurred.
H(x0) passes a severe test with x0. The degree of severity is at least .975. The larger the hypothesized value of μ, the more severely it has been ruled out by this data.
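In symbols, the severity with which the data warrant H(x0): μ < x̄0 + 2σx, evaluated against an alternative μ*, is P(X̄ > x̄0; μ*); a sketch with assumed numbers (not from the slides):

```python
# Sketch: severity of H(x0): mu < xbar0 + 2*sigma_xbar, evaluated at alternatives mu*.
# All numerical values are illustrative assumptions.
from scipy.stats import norm

sigma, n = 1.0, 25
sigma_xbar = sigma / (n ** 0.5)
xbar0 = 2.0                               # hypothetical observed mean
ci_upper = xbar0 + 2 * sigma_xbar

def severity(mu_star):
    """P(Xbar > xbar0; mu*): prob. of a result more discordant with H(x0), were mu = mu*."""
    return norm.sf(xbar0, loc=mu_star, scale=sigma_xbar)

for mu_star in (ci_upper, ci_upper + 0.1, ci_upper + 0.2):
    print(f"mu* = {mu_star:.2f}: severity >= {severity(mu_star):.4f}")
```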

19

There are many different ways to verbalize the same piece of reasoning. (This has a precise correspondence to a statistical significance test of H0: μ = μ0 vs. H1: μ < μ0; all the values within the CI are deemed "acceptable" or "consistent with x at the specified level.")

It is the way I appeal to severity to avoid 'fallacies of acceptance' in testing (still within our Normal IID sample): H0: μ = μ0 vs. H1: μ > μ0. Failure to reject H0 at a low level of significance cannot be taken as evidence that H0 is precisely true; severity leads to using the data to determine which discrepancies the test had good capability of detecting and which not. The upshot is akin to reporting several different upper confidence limits; unlike the confidence interval, where all points are treated on a par, each corresponds to a different level of severity:
μ < x̄0 + zα σx may be inferred with severity (1 − α).
One would want some high and some low benchmarks, the latter to show what inferences are not warranted.
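Tabulating several benchmarks amounts to computing SEV(μ < x̄0 + zα σx) = 1 − α for a few choices of zα; a sketch with assumed values (not from the slides):

```python
# Sketch: several upper limits, each inferred with a different severity.
# sigma, n, and xbar0 are illustrative assumptions.
from scipy.stats import norm

sigma, n = 1.0, 25
sigma_xbar = sigma / (n ** 0.5)
xbar0 = 2.0                                # hypothetical observed mean

for z in (0.5, 1.0, 1.5, 1.96, 2.5):
    upper = xbar0 + z * sigma_xbar
    sev = norm.cdf(z)                      # SEV(mu < xbar0 + z*sigma_xbar) = 1 - alpha
    print(f"mu < {upper:.2f} may be inferred with severity {sev:.3f}")
```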

20

Could one take these readings as evidence that Isaac's temperature is equal to the 98.6 standard? No, because there is always some degree of difference beyond the sensitivity of these instruments. Since this procedure would fail to detect those differences with fairly high probability, even were they present, failing to detect them is not good grounds that they are absent.
Error probabilities may be used to ensure and evaluate the capacities of tests to detect errors and discrepancies; once the data are in hand, we give a post-data assessment of the test's probativeness in relation to various errors. We reach and critically evaluate inferences accordingly.

SEVERITY PRINCIPLE:
• Infer H just when H has withstood a severe probe of error: one that would have (with high probability) signaled the presence of a given discrepancy or error, were one being committed.
• If a procedure had a very low probability of signaling the presence of a discrepancy, even if present, then failure to signal it is poor evidence that it is absent.

21

Following this inductive rule has low long-run error probabilities, but what warrants the inference is that the test had a high probative capacity for detecting error in the inquiry at hand — the same self-correcting rationale is open to quantitative and qualitative tests.
One may apply probabilities to events, as usual, by warranting inferences to statistical hypotheses about their distribution; in these inferences the probabilities attach to the test procedures, not the hypotheses.
One is led to the test statistic and corresponding CI by the criterion of ensuring that, with high probability (1 − α), CIU exceeds the true value of μ, i.e., Pμ(μ < CIU) = 1 − α, while at the same time Pμ(μ* < CIU) is minimal for μ* > μ.
The error probabilities become relevant to the particular data by correctly characterizing the probative capacities, precision and reliability of the test for the inference of interest.

22

Let us go back to the drive to make good on (what is thought to be) a Fisherian attempt to have error probabilities rub off on particular inferences — at least in special cases… Fisher was an ardent anti-Bayesian and did not appeal to any prior probabilities for μ that would allow it to be seen as a random variable.

23

Bayesian Posteriors: Probability applies to the single case as an assignment of degree of actual or rational belief in hypotheses, based on assigning prior probabilities to an exhaustive set of statistical hypotheses, which are then updated by means of the likelihoods of the various hypotheses given the data.
A statistically significant α difference from H0 can correspond to a large posterior in H0.
From the Bayesian perspective, significance levels could not give post-data measures of inductive evidence for the single case. From the significance test perspective: the recommended priors result in highly significant results being construed as no evidence against the null — or even evidence for it!
The conflict often considers the two-sided version of our test with X = (X1, …, Xn), each Xi Normal, N(μ, σ²), IID, with σ known:
H0: μ = μ0 versus H1: μ ≠ μ0.
If |x̄0 − μ0| > 2σx then x̄0 is statistically significant at the .05 level.

24

Two one-sided α-level tests are combined: the significance level is 2α because of a "selection effect."
"Assuming a prior of .5 to H0, with n = 50 one can classically 'reject H0 at significance level α = .05,' although P(H0 | x) = .52 (which would actually indicate that the evidence favors H0)."
This is taken as a criticism of significance levels only because it is assumed the .52 posterior is the appropriate measure of beliefworthiness. As the sample size increases, the conflict becomes more noteworthy. If n = 1000, a result statistically significant at the .05 level leads to a posterior to the null of .82! SEV(H1) = .95 while the corresponding posterior has gone from .5 to .82. What warrants such a prior?

Posterior P(H0 | x) by p-value (with cutoff t) and sample size n:

  p      t        n=10    n=20    n=50    n=100   n=1000
 .10    1.645     .47     .56     .65     .72     .89
 .05    1.960     .37     .42     .52     .60     .82
 .01    2.576     .14     .16     .22     .27     .53
 .001   3.291     .024    .026    .034    .045    .124

25
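The table's entries can be reproduced, approximately, with the spike-and-slab setup commonly used in this literature: prior mass .5 on H0: μ = μ0 and the remaining .5 spread as a N(μ0, σ²) prior on μ under H1. That prior choice is an assumption on my part; the slides do not spell it out.

```python
# Sketch: posterior P(H0 | x) under the assumed prior P(H0) = .5 and, under H1,
# mu ~ N(mu0, sigma^2). Reproduces the table above approximately.
import numpy as np

def posterior_H0(t, n):
    """P(H0 | x) given |t| = sqrt(n)*|xbar - mu0|/sigma, with equal prior odds."""
    b01 = np.sqrt(n + 1) * np.exp(-0.5 * t**2 * n / (n + 1))   # Bayes factor for H0 vs H1
    return b01 / (1 + b01)

for p, t in [(.10, 1.645), (.05, 1.960), (.01, 2.576), (.001, 3.291)]:
    row = "  ".join(f"{posterior_H0(t, n):.2f}" for n in (10, 20, 50, 100, 1000))
    print(f"p = {p:<5}  t = {t}:  {row}")
```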

(1) Some claim the prior of .5 is a warranted frequentist assignment: H0 was randomly selected from an urn in which 50% of the hypotheses are true; (*) therefore P(H0) = .5.
But H0 may be a mean ocean temperature, a mean increased risk of a drug, a mean deflection of light. What should go in the urn of hypotheses? For the frequentist: either H0 is true or false; the probability in (*) results from that same fallacious instantiation.

26

Reference or "Impersonal" Bayesians
Because of the difficulty of eliciting subjective priors, and because of the reluctance among scientists to allow subjective beliefs to be conflated with the information provided by data, much current Bayesian work in practice favors conventional "default," "uninformative," or "reference" priors. (Cox and Mayo 2007)
In two-sided testing, this gives .5 to H0, the remaining .5 probability being spread out over the alternative parameter space (Jeffreys). This "spiked concentration of belief in the null" is at odds with the prevailing view that "we know all nulls are false." We have just seen the upshot of such an assignment.
• Reference Bayesians seem to consider testing very difficult, and have largely put it aside to consider confidence intervals.
• Use of a "flat" prior on μ allows a posterior probability that matches the confidence level:
Π(μ < x̄0 + zα σx | x̄0) = 1 − α
So their posterior can "match" the error probability, seeming then to justify Fisher's illicit fiducial probabilities to μ.
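Under an (improper) flat prior on μ, the posterior given x̄0 is N(x̄0, σ²/n), so the posterior probability of the upper-limit claim equals the confidence level; a quick numerical check (illustrative values, not from the slides):

```python
# Sketch: with a flat prior on mu, the posterior is N(xbar0, sigma^2/n), so the
# posterior probability of "mu < xbar0 + z*sigma_xbar" matches 1 - alpha. Values assumed.
from scipy.stats import norm

sigma, n = 1.0, 25
sigma_xbar = sigma / (n ** 0.5)
xbar0 = 2.0                                   # hypothetical observed mean

for z_alpha in (1.0, 1.645, 1.96):
    upper = xbar0 + z_alpha * sigma_xbar
    post_prob = norm.cdf(upper, loc=xbar0, scale=sigma_xbar)   # flat-prior posterior prob.
    print(f"Pi(mu < {upper:.2f} | xbar0) = {post_prob:.3f}   vs   1 - alpha = {norm.cdf(z_alpha):.3f}")
```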

27

1. What do reference posteriors measure?
• A classic conundrum: there is no unique "noninformative" prior. (Supposing there is one leads to inconsistencies in calculating posterior marginal probabilities.)
• Any representation of ignorance or lack of information that succeeds for one parameterization will, under a different parameterization, entail having knowledge.
Contemporary "reference" Bayesians seek priors that are simply conventions to serve as weights for reference posteriors:
• not to be considered expressions of uncertainty, ignorance, or degree of belief;
• they may not even be probabilities; flat priors may not sum to one (improper priors).
If priors are not probabilities, what then is the interpretation of a posterior?
2. Priors for the same hypothesis change according to what experiment is to be done! Bayesian incoherence. If the prior for H represents information, why should it be influenced by the sample space of a contemplated experiment? This violates the likelihood principle — the cornerstone of Bayesian coherency.
This seems to wreak havoc with basic Bayesian foundations, but without the payoff of an objective, interpretable output.

28
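The reparameterization point can be made concrete: a prior flat on one scale is far from flat on a transformed scale. The sketch below uses a Bernoulli probability p versus its log-odds as a stand-in example (not the Normal-mean case discussed in the text).

```python
# Sketch: a "flat" prior is not invariant under reparameterization.
# Flat on p in (0,1) is decidedly non-flat on the log-odds scale.
import numpy as np

rng = np.random.default_rng(4)
p = rng.uniform(0, 1, 1_000_000)              # flat prior on p
log_odds = np.log(p / (1 - p))                # the same draws, re-expressed

for lo, hi in [(-4, -2), (-1, 1), (2, 4)]:    # equal-width intervals on the new scale
    frac = np.mean((log_odds > lo) & (log_odds < hi))
    print(f"prior mass of log-odds in ({lo}, {hi}): {frac:.3f}")
```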

3. Reference posteriors with good frequentist properties
Reference priors are touted as having some good frequentist properties, at least in one-dimensional problems. They are designed to match frequentist error probabilities. If you want error probabilities, why not use techniques that provide them directly?
Coverage probability: P(μ < x̄0 + 2σx | x̄0) = .975
Some even call the reference posterior of .025 the "Bayesian error probability" for (μ > x̄0 + 2σx) (e.g., Jim Berger). Is this different from the behavioristic approach? One can't really tell what's going on. But if frequentist error probabilities don't warrant rational degrees of belief in statistical inferences, then why do they warrant them just because you found a prior designed to make .975 the posterior probability?
Moreover, such cases are very special; in others the reference posterior can correspond to inferences with poor frequentist error probabilities (e.g., stopping rules: keep sampling until μ is excluded from the 2-sided Bayesian confidence interval; see the sketch below). …If rejecting the null means acting as if there is a severe local storm, I'm not sure this primitive community would have survived!

29
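The stopping-rule worry can be checked by simulation (a sketch with assumed settings, not from the slides): keep sampling until the two-sided interval x̄ ± 2σ/√n, which under a flat prior is also the 95% posterior interval, excludes a true μ0, and see how often that happens within a finite horizon.

```python
# Sketch: optional stopping ("sample until mu0 is excluded from the 2-sided interval")
# inflates the error rate far beyond the nominal .05, even though the flat-prior
# posterior interval at the stopping time still reads as 95%. Settings are assumed.
import numpy as np

rng = np.random.default_rng(5)
mu0, sigma = 0.0, 1.0
n_max, trials = 1000, 2000

excluded = 0
for _ in range(trials):
    x = rng.normal(mu0, sigma, n_max)                     # H0 is true throughout
    n = np.arange(1, n_max + 1)
    xbar = np.cumsum(x) / n
    if np.any(np.abs(xbar - mu0) > 2 * sigma / np.sqrt(n)):
        excluded += 1                                     # mu0 "excluded" at some n

print(f"trials that eventually 'reject' a true mu0: {excluded / trials:.2f}")
print("nominal error rate with fixed n: about 0.05")
```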

CONCLUDING COMMENTS:
• Frequentist probabilities apply to events.
• One needs to infer a statistical model that assigns these probabilities to random variables.
• Probability is "applicable" to such specific inferences, not to assign degrees of belief, confirmation, or the like, but to evaluate the associated severity.
• Error probabilities of frequentist methods may be used to obtain severity assessments.
By splitting up a substantive inquiry into local and piecemeal hypotheses, we may probe them with severity. The full story of the error statistical account involves relying on a handful of canonical procedures to build an increasingly probative arsenal to simulate and detect errors. Whether this is or can be done with inferences about weather patterns, etc., I leave to others, in particular Aris Spanos…

30
