The following text is a chapter from a piece I wrote during my studies at the University of Oxford.
If imminence in its most generic sense can be defined as the quality of an event being about to occur promptly and inevitably, then warning is the corresponding function and activity of detecting, measuring and validating imminence, usually carried out by a state’s foreign intelligence apparatus. Ideally, such a warning system is set in operation as early and as broadly as possible, although there is a certain irony in the fact that the further in advance a warning is expressed (that is, the more remote and distant the threat is perceived to be), the less imminence may figure in it. Conversely, if warning fails to come early ‘enough’ for a state to deliberate on its options of reaction, any threat becomes an imminent one, and the likelihood of resorting to force, for lack of less invasive protective alternatives, increases. If early warning fails, the last resort becomes the first choice.
“[Therefore, F. S.], the benefits of early warning of emerging crises are obvious. It provides more time to prepare, analyze and plan a response and, in the event of intervention, enhances its likelihood of success. Early warning can also contribute to the establishment of goals to be achieved, development of courses of action and their comparison, leading eventually to implementation of chosen options, and finally analysis of the reaction of the parties involved and potential scenarios. Because of the importance of early warning, crisis-management and conflict-prevention procedures focus in the early stages on information acquisition, assessment and analysis.”[1]
It is the primary task of intelligence to assist political decision-makers in dealing with and making sense of uncertainty, or, simply put, to anticipate and prevent surprises affecting a state’s national security, namely its territorial integrity and political independence, by providing actionable assessments and analysis of unfolding events. While tactical or incident warning refers to specific “hot-button issues” and concrete threats that are imminent by definition (e. g. ‘smoking guns and ticking bombs’), early or strategic warning can be described as the “timely analytic perception and effective communication to policy officials of important changes in the level or character of threats to national security interests […, and, F. S.] changes in the level of likelihood that an enemy will strike.”[2]
Strategic issues may, if neglected, become tactical ones. The analytical nexus between the two, which remains prone to various heuristic biases, must therefore be centered on ensuring readiness and responsiveness rather than on averting and domesticating surprise, which makes scenario analysis indispensable in addition to evaluating the steady indicators put in place and the manifold random raw bits of information collected. Even though asking ‘what if’, with all of the associated assumptions, hypotheses, and intelligence tradecraft, does often fill informational gaps, ‘unknowns’ remain, both those which are simply not yet known (e. g. to what exact level has Iran enriched its uranium?) and those which cannot be known at all (e. g. what will happen when Tunisian street vendor Mohamed Bouazizi[3] sets himself on fire?).
Warning aims at exposing causes and predicting effects, making sense of contingency, evaluating the consequences of potential action or inaction, and testing the explanatory power of the proverbial informational ‘needle in the haystack’[4]. It is about connecting inconspicuous and incoherent pieces of information from all available and accessible sources and indicators, making patterns visible and mapping the variables of a given threat’s probability[5], deducing conclusions, and making educated guesses based on prior experience, but it also encompasses defining and not merely deciphering threats. Especially for practical reasons (i. e. resources), strategic early warning “focuses on indications of hostilities and does not spread its consideration to all matters of general intelligence significance. […] [Indeed, this, F. S.] system has served to reduce the number of alarmist ‘flaps’.”[6] As a matter of fact, it is not only resource constraints that require warning to be selective, but even more so the limited attention span of the recipients of intelligence. Intelligence that red-flags any eventuality whatsoever is neither actionable, nor is it precise or relevant.
“Selectivity involves rejection, and rejection involves risk. If intelligence is to eschew the shotgun approach [i. e. warning against anything and everything, F. S.] in the interests of being read and respected, it will have to pick from the voluminous mass of often fragmentary and sometimes contradictory data a limited number of items to pass along, and sometimes what it rejects will later prove to be important.”[7]
In a nutshell, intelligence warning addresses and strives to balance three aspects of a threat: its importance, likelihood and imminence.
Importance, again, refers to the threat being largely inevitable if no action is taken and, above all, sufficiently grave or ‘overwhelming’ once it materializes (requiring warning also to assess an event’s impact based on the different scenarios of escalation), whereas imminence and likelihood are in a way a hybrid, mutually reinforcing factor. Importance is directly linked to a government’s national security priorities and the respective definitions of threats. This certainly also affects a state’s interpretation and application of the legal provisions of Article 51, namely the threshold at which a given risk may be recognized as a ‘threat of [armed] force’, initially irrespective of UN approval. Moreover, importance serves as the primary filter (cf. selectivity) for ranking threats according to their severity and for communicating them accordingly (the ‘hierarchy’ of threats typically steers the distribution of intelligence warning, i. e. as a rule of thumb: the more important and concrete the threat, the more senior the recipient of the information).
Likelihood or probability is indeed the crux of the matter, because – to paraphrase Niels Bohr – “prediction is very difficult, especially about the future”. In comparison, estimating and rating the importance of a threat based on its consequences is a rather straightforward, in many cases predominantly academic undertaking. Like importance, likelihood still asks ‘what if’, and whether the chain of events will necessarily result in a use of force, but it is subject to a greater and more complex operational interdependence, for it is almost entirely measured with the help of actual intelligence information (‘indicators’) gathered through various open and covert channels and means (e. g. human sources, geospatial data, electronic signals) on the ground (e. g. mobilization of troops, preparations for attack).
Not only does early warning estimate how likely it is for a certain event to occur, but it also assesses – closely related in probability theory – whether a statement is true or false, making validation and verification, in other words finding proof and collecting evidence, indispensable facets of assessing likelihood (e. g. is Iran really capable of enriching uranium to 60%?).
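To make the notion of ‘measuring’ likelihood slightly more tangible, consider a purely illustrative Bayesian sketch; the figures are hypothetical and chosen only for demonstration, not drawn from any actual assessment. Suppose analysts assign a prior probability of attack of $P(A) = 0.1$, and a newly observed indicator $I$ (say, an unusual mobilization of troops) would be expected with probability $P(I \mid A) = 0.8$ if an attack were in preparation, but only $P(I \mid \neg A) = 0.2$ otherwise. Bayes’ rule then yields

$$P(A \mid I) = \frac{P(I \mid A)\,P(A)}{P(I \mid A)\,P(A) + P(I \mid \neg A)\,P(\neg A)} = \frac{0.8 \times 0.1}{0.8 \times 0.1 + 0.2 \times 0.9} \approx 0.31.$$

A single, fairly diagnostic indicator thus raises the estimated likelihood from ten to roughly thirty percent – still far from certainty, which is precisely why warning judgments rest on the aggregation and validation of many such indicators, and on an assessment of their reliability, rather than on any one observation.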
“[It can be argued, though, that for reasons of practicability, F. S.] [t]he absence of specific evidence of where an attack will take place or of the precise nature of an attack does [still, F. S.] not preclude a conclusion that an armed attack is imminent for purposes of the exercise of a right of self-defense, provided that there is a reasonable and objective basis for concluding that an armed attack is imminent.”[8]
The significance of validation and verification makes for a somewhat formal and logical approach operating with terms and concepts beyond normative judgments, and both are crucial for improving collection and analysis as well as for ensuring legal review, which will be far better qualified to challenge missing or weak evidence than it will be competent to scrutinize a state’s rating of a threat’s general strategic importance. Apart from that, the whole idea of ‘measuring’ likelihood and imminence reflects the wishful thinking that intelligence warning represents a scientifically rigorous and incontestable activity, which it is, of course, not, and which it can never credibly be. In any case, such measurement is rendered null and void by subscribing to Cheney’s ‘One Percent Doctrine’, since there is hardly such a thing as a clear-cut ‘zero-probability event’ in international relations.
Imminence, as has been demonstrated, must not replace importance (or the quality of a threat to be grave and overwhelming enough to constitute necessity), nor must it play a greater role than the latter or even be misused as a pretext for resorting to force in self-defense. Its ‘attractiveness’ as a casus belli (or, at least, as one perspicuous justification among others) is likely due to the nature of intelligence forecasting and warning, which play an ever-expanding part in today’s security policy. Sure enough, the nature and origin of threats change over time, and so does their quality of being imminent and grave. In addition, the mere act of warning significantly influences, and potentially even delays or hinders, the unfolding of events both explicitly and implicitly, for example when a state makes public its – hitherto clandestine – knowledge of another state’s preparations for an attack. Any assessment of a threat’s likelihood and imminence must keep pace with such dynamic developments. Generally speaking, the more prepared and determined a state is to respond to a specific threat with a fully developed strategy, the less ‘threatening’ that threat is. Nonetheless, one and the same threat can be a significant cause of strategic concern over a long period of time, e. g. the confrontation between the blocs during the Cold War, with the underlying nuclear threat being certainly grave and, repeatedly, acutely imminent.
Policy-makers and government officials are the natural recipients of intelligence information, having privileged or exclusive access to the informational advantage offered by their services, and it goes without saying that their oftentimes quite delicate relationship (or their sheer mutual ignorance) poses some very severe problems with regard to an efficient early warning regime. But ultimately, political decision-makers decide on what constitutes a threat’s imminence, either based on sound (or flawed) intelligence analysis and the resulting recommendations, or notwithstanding such assessment. Interestingly, no benchmark has yet been suggested for comparing the ‘protective performance’ of governments avid for intelligence contributions with that of governments resistant to them. What can be said, though, is that whenever the political leadership misuses intelligence merely to flank political decisions which have already been taken (or to give its blessing to desired political outcomes), spectacular misjudgments regarding the imminence of a threat are the rule, not the exception.
But not only that: in prominent cases where imminence played a major role in a state’s decision to anticipatorily resort to force in self-defense, it was intelligence that was blamed by politics when that very imminence could not be properly established ex post. The same holds true for situations in which warnings were not passed on in time to be comprehensively considered in the political decision-making process, that is, early warning coming too early or too late, again demonstrating that intelligence and politics rarely complement each other seamlessly, for lack of mutual understanding or trust. Yet, as a momentous side note, we also need to acknowledge that often “these officials had indeed been warned, sometimes repeatedly, but won’t admit it”[9], frequently making it hard to distinguish intelligence failure from policy failure in the end.
“That said, strategic warning on most issues, most of the time has largely been an intelligence function, the practitioners of which hope will be taken seriously in policymaking. And contingency planning has essentially been a policy function, the practitioners of which hope to garner useful Intelligence Community support. While the record shows a mixture of successful and unsuccessful connections, policy community criticism of strategic warning comes across more vividly in the record than does praise.”[10]
“For evidence-based intelligence advocacy to ever become a reality the culture of intelligence must [therefore, F. S.] fundamentally change.”[11]
This leads to the preliminary finding that accountability for the accuracy of warning mainly rests with intelligence, serving in a supportive, “decision-enhancing” manner, whereas politics has the final say and responsibility for determining overall necessity, usually also – at least to some extent – on the grounds of that warning. These are obviously linked processes, and so it is fair to state that establishing imminence is a “collaborative governmental function”[12], as is enhancing a general defensive awareness and an actual emergency preparedness. In practice, imminence and importance usually denote the very point in time at which a potential threat, and the issue of forceful protection against it, can no longer be kept off the political agenda and, as the case may be, out of public debate. In politics, as opposed to intelligence, imminence is decided upon, not detected, and so there may occasionally be confusion about the nature of self-defense resulting from its establishment.
“Be that as it may, the use of factual [importance, F. S.] as well as of temporal criteria [proximity, F. S.] in defining ‘imminence’ narrows the gap between pre-emptive [the threat’s imminence matters most, F. S.] and preventive [the threat’s inevitability and severity matter most, F. S.] self-defence.”[13]
Now while there are, as aforementioned, customary law and binding international legal conventions governing the use of force in self-defense, including notions of imminence, (international more than domestic) law’s actual role in establishing imminence seems rather vague, if not virtual, at first glance. In fact, legal considerations and mechanisms gain importance mostly after imminence has been invoked – among other necessary reasons and evidentiary justifications – for resorting to force in anticipatory self-defense, namely with regard to the burden of proof and, most notably, intelligence as evidence. For the purposes of this study, it is essential to understand these spheres – the legality of anticipatory self-defense, the burden of proof, and intelligence as evidence – not as self-referential, but as interlinked, and thus to put them in perspective accordingly.
As already laid out, there is widespread legal consensus that Article 51 can only be invoked in cases involving an ‘armed attack’, and thus any consideration of imminence will have to relate to the definition of what an armed attack is[14], and when it begins to occur, e. g. clarifying whether a series of attacks which, individually, would not be grave enough to meet the required severity can be seen as one ‘cumulative’ armed attack.
“Under this view, it is not the imminence of an isolated action that is relevant, but rather the relationship between a series of attacks, and whether there is sufficient reliable evidence to conclude that further attacks are likely, not whether they are imminent.”[15]
“While the [International Court of Justice, F. S.] in the Nicaragua Case hinted that it recognised the ‘cumulative effect’ argument, the Oil Platforms judgment provides the first clear indication that the Court accepts the validity of this approach. This development is significant because the ‘cumulative effect’ approach means that a wider range of attacks may now constitute an ‘armed attack’. This does not necessarily equate to a lowering of the high threshold enunciated in the Nicaragua Case, as the cumulative effect of a series of attacks must still reach this threshold of gravity. However, it does mean that the range of circumstances in which a state is permitted to respond with force in self-defence may now be broader. In particular, recognition of the ‘cumulative effect’ approach is likely to be relevant for states seeking to use force in response to relatively minor but persistent terrorist attacks.”[16]
Now once a grave and imminent armed attack has been spotted, a state has to determine what purpose a presumably permissible act of ‘inherent’ self-defense will have to serve before resorting to it. The defending state has to ponder its own options for response and anticipate their consequences, ranging from solely preventing the attack triggering Article 51 from being carried out to eliminating the source of the threat in order to prevent it from recurring, for example the US-led ‘Operation Enduring Freedom’ in October 2001, where imminence did not figure prominently at all[17]. Setting up an appropriate forceful response here and in similar scenarios becomes particularly problematic (not least for reasons of targeting), as the threat may not originate from another clearly identifiable state entity, but, for example, from a supranational terrorist network only loosely (if at all) associated with a certain harbouring regime. In the meantime, rogue actors have learned to use the confusion caused by this lack of discrimination to covertly advance their own agendas.
It is important to note that while ‘successful’ terrorist acts are, by definition, surprise acts, they seldom come without prior warning, both from intelligence and from the terrorists themselves, who utilize violence as a means of propaganda for their cause. Today, terrorism is understood not least as a form of political violence carried out to draw the widest possible attention to an issue, all the more so since terrorist groups usually lack the military capabilities and structures to compete conventionally. So although only few terrorist attacks will actually meet Article 51’s severity threshold, the permanent visibility of danger in the post-9/11 era has undoubtedly led to a tense atmosphere of ‘constant imminence’, implicitly giving more weight to the concept. The prevailing question is no longer if (and what kind of) an attack is about to occur, but when and where. If a threat cannot be fully eliminated, it is an intended object of national self-defense to keep it away from a state’s territory, again creating incentives for addressing potential threats abroad before they become ‘too imminent’ at home.
It remains uncontradicted that imminence alone does not suffice in international law to make a case for anticipatory self-defense, but rather serves as a supportive, restraining or enabling aspect of necessity. The further away the threat, the smaller the role it will naturally play, and so the basic sequence in a state’s self-defense rationale will be precaution, prevention, and preemption. All anticipatory forceful self-defense has in common that it constitutes an offensive military strategy against the background of a defensive intent and, in a strict sense, reverses the causality of attack and striking back for various reasons, mainly to avoid actual harm and to use operational advantages to full capacity. Prevention applies to a potential and plausible future threat or attack, whereas the rather rare distinct cases of preemption are predicated on striking first when there is credible indication and evidence (in most cases stemming from intelligence sources) of an enemy about to attack or already underway[18]. Prevention (sometimes referred to as ‘wars of choice’) and preemption (‘wars of necessity’) are therefore, in their justifications, distinguished from each other by imminence (as a matter of timing) and evidence, and, consequently, by the contribution of intelligence (which “plays very different roles depending on whether the context is preemptive war or preventive war”[19]), yet the semantic confusion between the two concepts oftentimes blurs their explanatory power and validity.
“As one moves from an actual armed attack as the requisite threshold of reactive self-defense, to the palpable and imminent threat of attack, which is the threshold of preventive self-defense, and from there to the conjectural and contingent threat of possible attack, which is the threshold of preemptive self-defense, the need for interpretive latitude [and the, F. S.] burden of proof become ever greater.”[20]
[1] J Kriendler, NATO Intelligence and Early Warning, 13(6) Conflict Studies Research Centre Special Series (2006) 5
[2] J Davis, Improving CIA Analytic Performance: Strategic Warning, 1(1) The Sherman Kent Center for Intelligence Analysis Occasional Papers (2002) 3
[3] Bouazizi’s self-immolation on 17 December 2010 provoked fierce public protest which evolved into the Tunisian Revolution and the wider ‘Arab Spring’.
[4] F Schaurer, The Sparks That Lit The Fire: On the Utility of Triggering Events (2011), <http://osintblog.org/?p=950> accessed on 19 January 2013
[5] The analytical instruments in place for classifying and estimating the likelihood of future developments and events, for example ‘Words of Estimative Probability’ (WEP), as well as intelligence writing in general is – at least in the US – undergoing extensive revision after the 9/11 and the Iraq Intelligence Commissions complained about a systemic lack of semantic precision in warning.
[6] T J Patton, The Monitoring of War Indicators (1995), <https://www.cia.gov/library/center-for-the-study-of-intelligence/kent-csi/vol3no1/html/v03i1a05p_0001.htm> accessed on 9 September 2012
[7] K Clark, On Warning (1995), <https://www.cia.gov/library/center-for-the-study-of-intelligence/kent-csi/vol9no1/html/v09i1a02p_0001.htm> accessed on 9 September 2012
[8] D Bethlehem (no 6) 775
[9] K Clark (no 27)
[10] J Davis, Strategic Warning: If Surprise is Inevitable, What Role for Analysis?, 2(1) The Sherman Kent Center for Intelligence Analysis Occasional Papers (2003) 14
[11] N Woodard, Tasting the Forbidden Fruit: Unlocking the Potential of Positive Politicization, 28(1) Intelligence and National Security (2013) 107
[12] J Davis (no 22) 6, 2
[13] N Tsagourias (no 10) 22
[14] Just as necessity and proportionality are not explicitly enshrined in Article 51 or anywhere else in international treaty law, a workable definition of armed attack must be derived from custom.
Cf. T Ruys (no 17) 8
[15] A Orakhelashvili, Threat, Emergency and Survival: The Legality of Emergency Action in International Law, 9(2) Chinese Journal of International Law (2010) 363
[16] A Garwood-Gowers, Case concerning Oil Platforms (Islamic Republic of Iran v United States of America): Did the ICJ Miss the Boat on the Use of Force?, 5(1) Melbourne Journal of International Law (2004) 251
[17] “[Still, ‘Enduring Freedom’, F. S.] remains the only internationally accepted example of a use of force directed against a State’s apparatus, where that State did not launch the armed attacks being responded to. […] In the meantime, State practice strongly suggests that the international community has recognized a right to use force in self-defence targeting non-State actors in foreign territory to the extent that the foreign State cannot be relied on to prevent or suppress terrorist activities.”
K N Trapp, Back to Basics: Necessity, Proportionality, and the Right of Self-Defence Against Non-State Terrorist Actors, 56(1) International and Comparative Law Quarterly (2007) 155
[18] “[…] international law holds that truly preemptive attacks are an acceptable use of force in self-defense, while preventive attacks usually are not.”
K P Mueller et al., Striking First: Preemptive and Preventive Attack in U.S. National Security Policy, RAND (2006) xii
[19] J Rosenwasser, The Bush Administration’s Doctrine of Preemption (and Prevention): When, How, Where? (2004), <http://www.cfr.org/world/bush-administrations-doctrine-preemption-prevention-/p6799> accessed on 9 January 2013
[20] W M Reisman (no 4) 87
Following the prevalent definitions of both concepts, Reisman is actually mistaking prevention for preemption and vice versa here.