Although Micipsa had given the Senate authority to arbitrate his will, it now allowed itself to be bribed by Jugurtha into overlooking his crimes. In 116 BC the Senate organized a commission, led by the ex-consul Lucius Opimius, to divide Numidia fairly between the two remaining contestants. Ho...
Bestia. |
Lucius Calpurnius Bestia, consul for the year, was appointed to command the Roman army in Africa against Jugurtha. He was accompanied by Scaurus and other experienced officers, and received an offer of alliance from Bocchus I, king of Mauretania. The defection of Bocchus, his own father-in-law, filled Jugurtha with ala... |
Spurius and Aulus Postumius.
The consul Spurius Postumius Albinus took command of the Roman army in Africa (110 BC), but failed to carry out energetic action, due to incompetence, indiscipline in his army, and – it was alleged – bribery by Jugurtha. Later in the year Albinus returned to Italy, leaving the command to his brother, Aulus Postumius Al... |
Metellus. |
After Postumius' defeat, the Senate finally shook itself from its lethargy, appointing as commander in Africa the plebeian Quintus Metellus, who had a reputation for integrity and courage. Metellus proved the soundness of his judgement by selecting officers for the campaign based on ability rather than rank. ...
When Metellus arrived in Africa in 109 BC, he first had to retrain the army and institute some form of military discipline. |
Metellus followed and crossed the mountains into the desert, advancing to the Muthul River. Jugurtha had divided his army into two detachments, one of which (composed of cavalry and the best of his infantry) lay south of the mountain on the right flank of the Romans, who were marching to the river Muthul, which lay par... |
A fresh round of negotiations came to nothing, with Metellus rejecting Jugurtha's heavy concessions and demanding that the king surrender himself into Roman custody. To resist the Romans more effectually, Jugurtha dismissed most of his low-quality recruits, keeping only the most active troops of infantry and light cava... |
At this point Jugurtha retired to the court of his father-in-law, king Bocchus I of Mauretania, who though previously professing friendship for the Romans, now received Jugurtha hospitably, and, without positively declaring war (on Rome), advanced with his troops into Numidia as far as Cirta, the capital. An internal s... |
Metellus was furious at all these developments and decided to make Marius' command far more difficult by refusing to let his legions serve under Marius. Metellus sent them back to Italy to join the army of the other consul, Lucius Cassius Longinus, solely to prevent them from being used in Numidia. (Lucius was about ...
When Gaius Marius arrived in Numidia as consul in 107 BC, he immediately ceased negotiation and resumed the war. Marius marched west plundering the Numidian countryside, seizing minor Numidian towns and fortresses trying to provoke Jugurtha into a set piece battle, but the Numidian king refused to engage. Marius' strat... |
Revelations. |
The Jugurthine War starkly exposed the political corruption of the period and foreshadowed what was to come. The fact that a man such as Jugurtha could have his treachery, conquests, and defiances ignored simply by buying Roman military and civil officials reflected Rome's moral and ethical decline. Romans now sought individual ...
The Roman historian Sallust wrote a monograph, Bellum Jugurthinum, on the Jugurthine War emphasising this decline of Roman ethics. He placed it, along with his work on the Catilinarian Conspiracy, in the timeline of the degeneration of Rome that began with the Fall of Carthage and ended with the Fall of the Roman Repub... |
Gerhard Flesch: |
'Gerhard Friedrich Ernst Flesch' (8 October 1909 – 28 February 1948) was a German SS functionary during the Nazi era. After World War II, he was tried, found guilty and executed for his crimes, specifically the torture and murder of members of the Norwegian resistance movement. |
Background. |
Flesch was an Oberregierungsrat and held the rank of SS-Obersturmbannführer (lieutenant colonel). He was born in Posen, Province of Posen, German Empire. Flesch became a member of the Nazi Party (NSDAP) in 1933. In 1934, he obtained his law degree and by 1936 was a member of the Gestapo, when Reinhard Heydrich appointed hi...
Career in World War II. |
After the outbreak of the war in September 1939, Flesch became leader of Einsatzkommando 2/VI in Poznań. Between 20 and 23 October 1939, the 14 Einsatzkommandos that he commanded executed 275 Poles in the Greater Poland region near Poznań who were named as Polish patriots by Wolfgang Bickerich, the Lutheran pastor in ...
In 1940, Flesch joined the 3rd SS Division Totenkopf in their march into France. He had a position as Regierungsrat (Executive Council, government advisor), and was an SS-Sturmbannführer (major) in April 1940, when he was assigned to Norway. His first job in Norway was Kommandeur der SiPo und des SD in Bergen (the Sich... |
Trial and execution. |
Flesch was a notorious torturer and ordered the execution of many members of the Norwegian resistance movement without trial. After World War II, in 1946, he was tried for numerous cases of torture and murder. He was charged with a series of war crimes committed in Norway; seven instances of orderi...
Devil's Sea: |
The 'Devil's Sea' (Japanese: 'ma no umi'), also known as the 'Devil's triangle', the 'Dragon's Triangle', the 'Formosa Triangle' and the 'Pacific Bermuda Triangle', is a region of the Pacific south of Tokyo. The Devil's Sea is sometimes considered a paranormal location, though the veracity of these claims has been questioned.
Description. |
In August 1945, a Mitsubishi A6M Zero supposedly went missing. A distress radio transmission from the Zero's pilot, Wing Commander Shiro Kawamoto, crossing the Triangle near the end of the war, created more questions than answers. The last thing his message said was "...something is happening in the sky...the sky is opening...
On 4 January 1955, Japanese ship Shinyo Maru No. 10 (第十伸洋丸) lost radio contact near Mikura-jima. Japanese newspapers then began to label the location as ma no umi until the ship was found safe on 15 January. Yomiuri Shimbun showed a map of the sea with points of several other ships that had been lost in recent years, a... |
In the U.S., The New York Times introduced this incident with the term "The Devil's Sea," where nine ships had been lost in perfect weather. Yomiuri Shimbun described the size of the ma no umi as follows: "From the Izu islands to east of the Ogasawara islands; about 200 miles east to west, and about 300 miles north to ... |
In Daniel Cohen's 1974 book Curses, Hexes & Spells, it's reported that legends of the danger of the Dragon's Triangle go back for centuries in Japan. Its most famous casualty was the No. 5 Kaiyō-Maru, a scientific research vessel, which disappeared with the loss of all hands on 24 September 1952. With such a dramatic h... |
Research also explores natural environmental changes as the cause of such controversial anomalies in the Dragon's Triangle. One of these explanations is the vast field of methane hydrates present on the ocean floor in the Dragon's Triangle area. Methane clathrates (methane hydrate gas) will "explode" when it ...
These gas eruptions can interrupt buoyancy and can easily sink a ship, leaving no trace of debris. Another explanation for this "paranormal" activity could be the undersea volcanoes that are very common in this area. It is quite characteristic for small islands in the Dragon's Triangle to frequently disappear and new i... |
Cuss: |
'Cuss' may refer to: |
Information bottleneck method: |
The 'information bottleneck method' is a technique in information theory introduced by Naftali Tishby, Fernando C. Pereira, and William Bialek. It is designed for finding the best tradeoff between accuracy and complexity (compression) when summarizing (e.g. clustering) a random variable 'X', given a joint probability d... |
Information theory of deep learning. |
The theory of the information bottleneck has recently been used to study deep neural networks (DNNs).
Consider <math>X </math> and <math>Y </math> respectively as the input and output layers of a DNN, and let <math>T </math> be any hidden layer of the network. |
Shwartz-Ziv and Tishby proposed using the information bottleneck to express the tradeoff between the mutual information measures <math>I(X,T)</math> and <math>I(T,Y)</math>. In this case, <math>I(X,T)</math> and <math>I(T,Y)</math> respectively quantify the amount of information that the hidden layer contains about the ...
They conjectured that the training process of a DNN consists of two separate phases: 1) an initial fitting phase in which <math>I(T,Y)</math> increases, and 2) a subsequent compression phase in which <math>I(X,T)</math> decreases. Saxe et al. countered the claim of Shwartz-Ziv and Tishby, a view that has been shared...
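In practice these mutual informations are often estimated by discretizing the hidden activations and counting bin occupancies; a minimal sketch along those lines is given below (the bin count, the binning scheme and the helper names are illustrative assumptions, not the exact procedure of Shwartz-Ziv and Tishby):
<syntaxhighlight lang="python">
import numpy as np

def mutual_information(a, b):
    """Estimate I(A;B) in bits from paired discrete (hashable) samples."""
    n = len(a)
    joint, pa, pb = {}, {}, {}
    for pair in zip(a, b):
        joint[pair] = joint.get(pair, 0) + 1
    for (x, y), c in joint.items():
        pa[x] = pa.get(x, 0) + c
        pb[y] = pb.get(y, 0) + c
    # I(A;B) = sum_{x,y} p(x,y) log2( p(x,y) / (p(x) p(y)) )
    return sum((c / n) * np.log2(c * n / (pa[x] * pb[y]))
               for (x, y), c in joint.items())

def binned(t, bins=30):
    """Discretize activations t of shape (n_samples, n_units) into equal bins."""
    edges = np.linspace(t.min(), t.max(), bins)
    return [tuple(row) for row in np.digitize(t, edges)]

# At each training epoch, with inputs X, labels Y and hidden activations T:
#   I_XT = mutual_information([tuple(x) for x in X], binned(T))
#   I_TY = mutual_information(binned(T), list(Y))
# Plotting (I_XT, I_TY) over epochs traces the "information plane".
</syntaxhighlight>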
Gaussian bottleneck. |
The Gaussian bottleneck, that is, applying the information bottleneck approach to Gaussian variables, leads to solutions related to canonical correlation analysis. Assume <math>X, Y \,</math> are jointly multivariate zero-mean normal vectors with covariances <math>\Sigma_{XX}, \,\, \Sigma_{YY}</math> and <math>T\,</math...
The projection matrix <math>A\,</math> in fact contains <math>M\,</math> rows selected from the weighted left eigenvectors of the singular value decomposition of the matrix (generally asymmetric)
<math>\Omega = \Sigma_{X|Y} \Sigma_{XX}^{-1} = I - \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{XY}^T \Sigma_{XX}^{-1}.</math>
Define the singular value decomposition
<math>\Omega = U \Lambda V^T \quad \text{with} \quad \Lambda = \operatorname{diag}\big(\lambda_1 \le \lambda_2 \le \cdots \le \lambda_N\big)</math>
and the critical values
<math>\beta_i^C = (1 - \lambda_i)^{-1}, \quad \lambda_i < 1;</math>
then the number <math>M \,</math> of active eigenvectors in the projection, or order of approximation, is given by
<math>\beta_{M-1}^C \le \beta \le \beta_M^C.</math>
And we finally get
<math>A = \big[ w_1 U_1, \ldots, w_M U_M \big]^T</math>
in which the weights are given by
<math>w_i = \sqrt{\big(\beta (1 - \lambda_i) - 1\big) \big/ \lambda_i r_i}</math>
where <math>r_i = U_i^T \Sigma_{XX} U_i.\,</math>
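A numerical sketch of this construction is given below (the data and covariances are synthetic placeholders, and recovering the left eigenvectors of <math>\Omega</math> through the eigen-decomposition of <math>\Omega^T</math> is an assumed, standard route):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Synthetic jointly Gaussian, zero-mean data (placeholder covariances).
n, dx, dy = 5000, 4, 3
X = rng.normal(size=(n, dx))
Y = X[:, :dy] + 0.5 * rng.normal(size=(n, dy))

Sxx = np.cov(X, rowvar=False)
Syy = np.cov(Y, rowvar=False)
Sxy = (X - X.mean(0)).T @ (Y - Y.mean(0)) / (n - 1)

# Omega = Sigma_{X|Y} Sigma_XX^{-1} = I - Sxy Syy^{-1} Sxy^T Sxx^{-1}
Omega = np.eye(dx) - Sxy @ np.linalg.solve(Syy, Sxy.T) @ np.linalg.inv(Sxx)

# Left eigenvectors of Omega = right eigenvectors of Omega^T.
lam, U = np.linalg.eig(Omega.T)
lam, U = lam.real, U.real
order = np.argsort(lam)                    # lambda_1 <= ... <= lambda_N
lam, U = lam[order], U[:, order]

beta = 10.0
active = lam < 1 - 1.0 / beta              # beta exceeds the critical beta_i^C
r = np.einsum('ij,jk,ki->i', U.T, Sxx, U)  # r_i = U_i^T Sxx U_i
w = np.sqrt(np.maximum(beta * (1 - lam) - 1, 0) / (lam * r))
A = w[active, None] * U[:, active].T       # rows w_i U_i^T of the projection
T = X @ A.T                                # compressed Gaussian representation
</syntaxhighlight>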
Applying the Gaussian information bottleneck to time series (processes) yields solutions related to optimal predictive coding. This procedure is formally equivalent to 'linear' Slow Feature Analysis.
Optimal temporal structures in linear dynamic systems can be revealed in the so-called past-future information bottleneck, an application of the bottleneck method to non-Gaussian sampled data. The concept, as treated by Creutzig, Tishby et al., is not without complication, as two independent phases make up the exerci...
Density estimation. |
Since the bottleneck method is framed in probabilistic rather than statistical terms, the underlying probability density at the sample points <math>X = \{x_i\} \,</math> must be estimated. This is a well-known problem with multiple solutions, described by Silverman.
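For instance, a Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth is one standard route (the data below are placeholders):
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
x = rng.normal(size=(2, 200))                 # 2-D samples; one column per point

kde = gaussian_kde(x, bw_method='silverman')  # Silverman's rule-of-thumb bandwidth
p_x = kde(x)                                  # density estimated at the sample points
p_x /= p_x.sum()                              # normalized discrete marginal p(x_i)
</syntaxhighlight>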
Clusters. |
In the following soft clustering example, the reference vector <math>Y \,</math> contains sample categories and the joint probability <math>p(X,Y) \,</math> is assumed known. A soft cluster <math>c_k \,</math> is defined by its probability distribution over the data samples <math>x_i: \,\,\, p(c_k | x_i)</math>. Tishby...
<math>\begin{align}
p(c|x) &= Kp(c) \exp \Big( -\beta\, D^{KL} \Big[ p(y|x) \,\big\|\, p(y|c) \Big] \Big) \\
p(y|c) &= \textstyle \sum_x p(y|x) p(c|x) p(x) \big/ p(c) \\
p(c) &= \textstyle \sum_x p(c|x) p(x) \\
\end{align}</math>
The function of each line of the iteration expands as follows:
'Line 1:' This is a matrix-valued set of conditional probabilities
The Kullback–Leibler divergence <math>D^{KL} \,</math> between the <math>Y \,</math> vectors generated by the sample data <math>x \,</math> and those generated by its reduced information proxy <math>c \,</math> is applied to assess the fidelity of the compressed vector with respect to the reference (or categorical) dat... |
and <math>K \,</math> is a scalar normalization. The weighting by the negative exponent of the distance means that prior cluster probabilities are downweighted in line 1 when the Kullback–Leibler divergence is large, thus successful clusters grow in probability while unsuccessful ones decay. |
'Line 2:' Second matrix-valued set of conditional probabilities. By definition
<math>\begin{align}
p(y_i|c_k) &= \sum_j p(y_i|x_j) p(x_j|c_k) \\
&= \sum_j p(y_i|x_j) p(c_k|x_j) p(x_j) \big/ p(c_k) \\
\end{align}</math>
where the Bayes identities <math>p(a,b)=p(a|b)p(b)=p(b|a)p(a) \,</math> are used. |
'Line 3:' This line finds the marginal distribution of the clusters <math>c \,</math>
<math>\begin{align}
p(c_i) &= \sum_j p(c_i, x_j) \\
&= \sum_j p(c_i|x_j) p(x_j) \\
\end{align}</math>
This is a standard result. |
Further inputs to the algorithm are the marginal sample distribution <math>p(x) \,</math>, which has already been determined by the dominant eigenvector of <math>P \,</math>, and the matrix-valued Kullback–Leibler divergence function
<math>D_{i,j}^{KL} = D^{KL}\Big[ p(y|x_j) \,\big\|\, p(y|c_i) \Big]</math>
derived from the sample spacings and transition probabilities.
The matrix <math>p(y_i | c_j) \,</math> can be initialized randomly or with a reasonable guess, while the matrix <math>p(c_i | x_j) \,</math> needs no prior values. Although the algorithm converges, multiple minima may exist that would need to be resolved.
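The three self-consistent equations translate directly into a fixed-point loop; a compact sketch is given below, assuming a known finite <math>p(x)</math> and conditional matrix <math>p(y|x)</math> (the array shapes, smoothing constant and iteration count are assumptions):
<syntaxhighlight lang="python">
import numpy as np

def ib_iterate(p_x, p_y_given_x, n_clusters, beta, n_iter=500, seed=0):
    """Fixed-point iteration of the three self-consistent equations.
    p_x: shape (n,); p_y_given_x: shape (m, n), one column per sample x_j."""
    rng = np.random.default_rng(seed)
    n = p_x.size
    # Line-1 matrix p(c|x): random initialization, columns normalized.
    p_c_given_x = rng.random((n_clusters, n))
    p_c_given_x /= p_c_given_x.sum(0)
    for _ in range(n_iter):
        # Line 3: p(c) = sum_x p(c|x) p(x)
        p_c = p_c_given_x @ p_x
        # Line 2: p(y|c) = sum_x p(y|x) p(c|x) p(x) / p(c)
        p_y_given_c = (p_y_given_x * p_x) @ p_c_given_x.T / p_c
        # D^KL[ p(y|x_j) || p(y|c_i) ] for every (cluster, sample) pair
        log_ratio = (np.log(p_y_given_x[:, None, :] + 1e-12)
                     - np.log(p_y_given_c[:, :, None] + 1e-12))
        dkl = (p_y_given_x[:, None, :] * log_ratio).sum(0)   # (clusters, samples)
        # Line 1: p(c|x) = K p(c) exp(-beta * DKL); K normalizes each column
        p_c_given_x = p_c[:, None] * np.exp(-beta * dkl)
        p_c_given_x /= p_c_given_x.sum(0)
    p_c = p_c_given_x @ p_x                                  # final marginal
    p_y_given_c = (p_y_given_x * p_x) @ p_c_given_x.T / p_c  # final p(y|c)
    return p_c_given_x, p_y_given_c, p_c
</syntaxhighlight>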
Defining decision contours. |
To categorize a new sample <math>x' \,</math> external to the training set <math>X \,</math>, the previous distance metric finds the transition probabilities between <math>x' \,</math> and all samples in <math>X</math>: <math>\tilde p(x_i) = p(x_i | x') = K \exp \Big( -\lambda f \big( \big| x_i - x' \big...
<math>\begin{align}
& \tilde p(c_i) = p(c_i | x') = \sum_j p(c_i | x_j) p(x_j | x') = \sum_j p(c_i | x_j)\, \tilde p(x_j) \\
& p(y_i | c_j) = \sum_k p(y_i | x_k) p(c_j | x_k) p(x_k | x') \big/ p(c_j | x') = \sum_k p(y_i | x_k) p(c_j | x_k)\, \tilde p(x_k) \big/ \tilde p(c_j) \\
\end{align}</math>
Finally <math>p(y_i | x') = \textstyle \sum_j p(y_i | c_j)\, \tilde p(c_j).</math>
Parameter <math>\beta \,</math> must be kept under close supervision since, as it is increased from zero, increasing numbers of features, in the category probability space, snap into focus at certain critical thresholds. |
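A sketch of this decision rule is given below (taking <math>f</math> as the squared Euclidean distance and reusing the training-set <math>p(y|c)</math> instead of re-deriving it with the tilde weights; both are simplifying assumptions):
<syntaxhighlight lang="python">
import numpy as np

def classify(x_new, X, p_c_given_x, p_y_given_c, lam=3.0):
    """Soft category distribution p(y|x') for a point outside the training set."""
    d2 = ((X - x_new) ** 2).sum(1)        # f(|x_i - x'|) as squared distance
    p_tilde_x = np.exp(-lam * d2)
    p_tilde_x /= p_tilde_x.sum()          # transition probabilities p(x_i|x')
    p_tilde_c = p_c_given_x @ p_tilde_x   # cluster weights p(c|x')
    return p_y_given_c @ p_tilde_c        # p(y|x') = sum_c p(y|c) p(c|x')
</syntaxhighlight>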
An example. |
The following case examines clustering in a four-quadrant multiplier with random inputs <math>u, v \,</math> and two categories of output, <math>\pm 1 \,</math>, generated by <math>y=\operatorname{sign}(uv) \,</math>. This function has two spatially separated clusters for each category and so demonstrates that the meth...
Twenty samples are taken, uniformly distributed on the square <math>[-1,1]^2 \,</math>. The number of clusters used beyond the number of categories, two in this case, has little effect on performance, and the results are shown for two clusters using parameters <math>\lambda = 3,\, \beta = 2.5</math>.
The distance function is <math>d_{i,j} = \big| x_i - x_j \big|^2</math> where <math>x_i = (u_i, v_i)^T \,</math> while the conditional distribution <math>p(y|x)\,</math> is a 2 × 20 matrix
<math>\begin{align}
& \Pr(y_i = 1) = 1 \text{ if } \operatorname{sign}(u_i v_i) = 1 \\
& \Pr(y_i = -1) = 1 \text{ if } \operatorname{sign}(u_i v_i) = -1 \\
\end{align}</math>
and zero elsewhere.
The summation in line 2 incorporates only two values representing the training values of +1 or −1, but nevertheless works well. The figure shows the locations of the twenty samples with '0' representing Y = 1 and 'x' representing Y = −1. The contour at the unity likelihood ratio level is shown, |
as a new sample <math>x' \,</math>is scanned over the square. Theoretically the contour should align with the <math>u=0 \,</math> and <math>v=0 \,</math> coordinates but for such small sample numbers they have instead followed the spurious clusterings of the sample points. |
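The setup of this example can be reproduced along the following lines (the random seed is arbitrary, and the final call refers to the 'ib_iterate' sketch above, which is itself an illustrative assumption rather than a reference implementation):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n = 20
X = rng.uniform(-1, 1, size=(n, 2))        # (u, v) pairs on the square [-1,1]^2
y = np.sign(X[:, 0] * X[:, 1])             # categories +/-1 from y = sign(uv)

# 2 x 20 conditional distribution p(y|x): a one-hot column per sample
p_y_given_x = np.stack([(y == 1).astype(float), (y == -1).astype(float)])

# Uniform marginal p(x) stands in for the eigenvector-derived one here
p_x = np.full(n, 1.0 / n)

# Two clusters, beta = 2.5 as in the text:
# p_c_given_x, p_y_given_c, p_c = ib_iterate(p_x, p_y_given_x, 2, beta=2.5)
</syntaxhighlight>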