Page to MD
A dataset of image-text pairs sourced from research papers on arXiv, where each image is derived from a PDF page and paired with its corresponding OCR.
# Calculating Valid Domains For Bdd-Based Interactive Configuration
Tarik Hadzic, Rune Møller Jensen, Henrik Reif Andersen
Computational Logic and Algorithms Group, IT University of Copenhagen, Denmark
tarik@itu.dk, rmj@itu.dk, hra@itu.dk

Abstract. In these notes we formally describe the functionality of Calculating Val...
The significance of this demand is that it guarantees the user a backtrack-free assignment to variables as long as values are selected from the valid domains. This reduces cognitive effort during the interaction and increases usability.
At each step of the interaction, the configurator reports the valid domains to the user, b... | |
is associated with a Boolean variable and has two outgoing edges, *low* and *high*. Given an assignment of the variables, the value of the Boolean function is determined by a path starting at the root node and recursively following the high edge, if the associated variable is true, and the low edge, if the associated vari...
binary encoding $v_0 \ldots v_{k_i-1}$, denoted as $v_0 \ldots v_{k_i-1} = \mathrm{enc}(j)$. Also, every combination of Boolean values $v_0 \ldots v_{k_i-1}$ represents some integer $j \leq 2^{k_i} - 1$, denoted as $j = \mathrm{dec}(v_0 \ldots v_{k_i-1})$. Hence, the atomic proposition $x_i = v$ is encoded as the Boolean expression $x^i_0 = v_0 \wedge \ldots \wedge x^i_{k_i-1} = v_{k_i-1}$....
<image>
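The enc/dec correspondence described above can be sketched in a few lines of Python. This is a minimal illustration only; the LSB-first bit order and the function names are assumptions, since the excerpt does not fix them:

```python
import math

def enc(j: int, k: int) -> list[int]:
    """Encode integer j as k Boolean values v0 ... v_{k-1} (LSB first, assumed)."""
    return [(j >> s) & 1 for s in range(k)]

def dec(bits: list[int]) -> int:
    """Decode a list of Boolean values back to the integer it represents."""
    return sum(b << s for s, b in enumerate(bits))

# A finite domain of size d needs k = ceil(log2(d)) Boolean variables.
k = math.ceil(math.log2(6))   # domain {0,...,5} -> k = 3
assert all(dec(enc(j, k)) == j for j in range(6))
```

Each finite-domain variable $x_i$ then contributes $k_i$ Boolean variables $x^i_0, \ldots, x^i_{k_i-1}$ to the BDD.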
assignment $\rho = \rho_{old} \cup \{(x, v)\}$. For every variable $x_i \in X$, the old valid domains are denoted $D_i^{\rho_{old}}$, $i = 0, \ldots, n-1$, and the old BDD $B_{\rho_{old}}$ is reduced to the restricted BDD $B_\rho(V, E, X_b, \mathit{var})$. The CVD functionality is to calculate the valid domains $D_i^{\rho}$ for the remaining unassigned variables ...
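As a reference point for the CVD functionality, here is a brute-force sketch that computes valid domains by enumerating completions of the partial assignment. This is not the paper's BDD-based algorithm, and all names are hypothetical:

```python
from itertools import product

def valid_domains(domains, constraint, partial):
    """For each unassigned variable, collect the values that extend
    `partial` to at least one full assignment satisfying `constraint`."""
    n = len(domains)
    free = [i for i in range(n) if i not in partial]
    valid = {i: set() for i in free}
    for combo in product(*(domains[i] for i in free)):
        full = dict(partial)
        full.update(zip(free, combo))
        if constraint(full):
            for i in free:
                valid[i].add(full[i])
    return valid

# Toy problem: x0 + x1 = x2 over domains {0, 1, 2}, with x2 = 2 assigned.
doms = [range(3)] * 3
cons = lambda a: a[0] + a[1] == a[2]
print(valid_domains(doms, cons, {2: 2}))
```

A BDD-based configurator computes the same sets without enumeration, which is what makes the interaction backtrack-free at interactive speed.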
```
CVD-Skipped(B)
1:  for each i = 0 to n − 1
2:      L[i] ← i + 1
3:  T ← TopologicalSort(B)
4:  for each k = 0 to |T| − 1
5:      u1 ← T[k], i1 ← var1(u1)
6:      for each u2 ∈ Adjacent[u1]
7:          L[i1] ← max{L[i1], var1(u2)}
8:  S ← {}, s ← 0
9:  for i = 0 to n − 2
10:     if i + 1 < L[s]
11:         L[s] ← max{L[s], L[i + 1]}
1...
```
will lead to a node other than T0, because then there is at least one satisfying path to T1 allowing xi = j.
```
Traverse(u, j)
1:  i ← var1(u)
2:  v0, ..., v_{ki−1} ← enc(j)
3:  s ← var2(u)
4:  if Marked[u] = j return T0
5:  Marked[u] ← j
6:  while s ≤ ki − 1
7:      if var1(u) > i return u
8:      if vs = 0 u ← low...
```
8. Tsang, E.: Foundations of Constraint Satisfaction. Academic Press (1993)
9. Dechter, R.: Constraint Processing. Morgan Kaufmann (2003)
10. Subbarayan, S., Jensen, R.M., Hadzic, T., Andersen, H.R., Hulgaard, H., Møller, J.: Comparing two implementations of a complete and backtrack-free interactive configurator. In:
C... | |
<image>
<image>
As mentioned before, the default sequence-weighting method used by the HMMER package is a high-quality algorithm. Therefore, we combine the default HMMER M matrix in (3) with the structural matrix Ms in (4), as shown in (5).
$$\mathbf{M_{s}^{'}=M M_{s}^{T}=\left(\begin{array}{c c c}{{w_{11}m_{... | |
2.3.4 Homologous Core Structure Structural similarity among proteins can provide valuable insights into their functionality. One way to assess structural similarity is through three-dimensional alignment of proteins, also called *structural alignment*. The goal is to align two or more proteins by trying to overlap ...
Table 3. HMMER-STRUCT paired t-test

| Component | p-value |
|---|---|
| HMMER | $10^{-4}$ |
| pHMMAcc | $10^{-4}$ |
| pHMMOi | $10^{-3}$ |
| pHMM2D | $10^{-3}$ |
| pHMM3D | $10^{-4}$ |

Paired two-tailed t-test comparing the performance of each HMMER-STRUCT component with the combined model.
## 4 Discussion
The accuracy of homology detection methods is essential for the problem of infe... | |
## Acknowledgement

We are grateful to CNPq for financial support.

## References
Alexandrov,V., Gerstein,M. (2004) Using 3D Hidden Markov Models that explicitly represent spatial coordinates to model and compare protein structures, BMC
Bioinformatics, 5, 110.
Altschul,F., Gish,W., Miller,W., Myers,E., Lipman,D. (1990) A... | |
## Bayesian Approach To Rough Set
Tshilidzi Marwala and Bodie Crossingham, University of the Witwatersrand, Private Bag x3, Wits, 2050, South Africa, e-mail: t.marwala@ee.wits.ac.za

This paper proposes an approach to training rough set models using a Bayesian framework trained with the Markov Chain Monte Carlo (MCMC) method. T...
differences in posterior probabilities between two states that are in transition (Metropolis et al., 1953). This algorithm ensures that states with high probability form the majority of the Markov chain and is mathematically represented as:
$$\text{If } P(M_{n+1} \mid D) > P(M_n \mid D) \text{ then accept } M_{n+1}, \tag{14}$$

els...
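The accept rule in equation (14) is the standard Metropolis criterion: a proposal with higher posterior probability is always accepted, otherwise it is accepted with probability equal to the posterior ratio. A minimal sketch, with a toy target and proposal (names assumed, not from the paper):

```python
import math, random

random.seed(0)

def metropolis_step(m_curr, log_post, propose):
    """One Metropolis accept/reject step: accept M_{n+1} outright if
    P(M_{n+1}|D) > P(M_n|D), else accept with probability equal to the
    posterior ratio; otherwise keep the current state."""
    m_new = propose(m_curr)
    log_ratio = log_post(m_new) - log_post(m_curr)
    if log_ratio > 0 or random.random() < math.exp(log_ratio):
        return m_new      # accepted
    return m_curr         # rejected

# Toy posterior: standard normal; proposals: small random walk.
log_post = lambda m: -0.5 * m * m
propose = lambda m: m + random.uniform(-1, 1)
chain = [0.0]
for _ in range(5000):
    chain.append(metropolis_step(chain[-1], log_post, propose))
```

Because only posterior ratios appear, the normalizing evidence term cancels, which is exactly why it can be ignored in the paper.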
autoencoder method, it is disadvantageous due to its "black box" nature, i.e., it is not transparent. To improve transparency, Bayesian rough set theory (RST) is proposed to forecast and interpret the causal effects of HIV. Rough sets have been used in various biomedical and engineering applications (Ohrn, 1999; P...
number was chosen as it gave a good balance between computational efficiency and accuracy. The parents' ages are given and discretised accordingly; education is given as an integer, where 13 is the highest level of education, indicating tertiary education.
Gravidity is defined as the number of times that a woman has ... | |
plausibility that a person is HIV positive and −1 indicating 100% plausibility of HIV negative. When training the rough set models using Markov Chain Monte Carlo, 500 samples were accepted and retained, meaning 500 rule sets, each containing between 50 and 550 rules, with an average of 222 rules as...
Lower Approximation Rules

1. If Race = African and Mothers Age = 23 and Education = 4 and Gravidity = 2 and Parity = 1 and Fathers Age = 20 Then HIV = Most Probably Positive
2. If Race = Asian and Mothers Age = 30 and Education = 13 and Gravidity = 1 and Parity = 1 and Fathers Age = 33 Then HIV = Most Probably Negative...
2. Deja, A., Peszek, P., 2003. Applying rough set theory to multi stage medical diagnosing. Fundamenta Informaticae, 54, 387–408.
3. Department of Health, 2001. National HIV and syphilis sero-prevalence survey of women attending public antenatal clinics in South Africa. http://www.info.gov.za/otherdocs/2002...
10. Lasry, G., Zaric, S., Carter, M.W., 2007. Multi-level resource allocation for HIV prevention: A model for developing countries. European Journal of Operational Research, 180, 786-799.
11. Leke, B.B., 2007. Computational Intelligence for Modelling HIV. Ph.D. Thesis, School of Electrical & Information Eng...
18. Nishino, T., Nagamachi, M., Tanaka, H., 2006. Variable precision Bayesian rough set model and its application to human evaluation data. Proceedings of SPIE - The International Society for Optical Engineering, 6104, 294-303.
19. Ohrn, A., 1999. Discernibility and Rough set in Medicine: Tools and Applications....
learning and decision analysis. Rough sets are useful in the analysis of decisions in which there are inconsistencies. To cope with these inconsistencies, lower and upper approximations of decision classes are defined (Inuiguchib and Miyajima, 2006). Rough set theory is often contrasted with fuzzy set theory ...
27. Witlox, F., Tindemans, H., 2004. The application of rough set analysis in activity based modeling: Opportunities and constraints. Expert Systems with Applications, 27, 585-592.
Once the information table is obtained, the data is discretised into partitions as mentioned earlier. An information system can be understood as a pair Λ = (U, A), where U and A are finite, non-empty sets called the universe and the set of attributes, respectively (Deja and Peszek, 2003). For every attribute a an ...
set of cases we want to approximate (Ohrn and Rowland, 2006). The lower approximation of set X is denoted $\underline{B}X$ and is mathematically represented as:

$$\underline{B}X = \{x \in U : B(x) \subseteq X\} \tag{3}$$

The upper approximation is defined as the collection of cases whose equivalence classes are at least partially con...
Rough Set Accuracy The accuracy of a rough set provides a measure of how closely the rough set approximates the target set. It is defined as the ratio of the number of objects which can be positively placed in X to the number of objects that can possibly be placed in X. In other words it is defined as the number ...
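The lower approximation of equation (3), the upper approximation, and the accuracy ratio just described can be sketched together on a toy information table. All names here are hypothetical illustrations, not the paper's code:

```python
from collections import defaultdict

def approximations(universe, attrs, target):
    """Lower/upper approximations of `target` under the indiscernibility
    relation induced by `attrs`, which maps each object to its attribute
    tuple.  An equivalence class wholly inside X goes to the lower
    approximation; one that merely overlaps X goes to the upper."""
    classes = defaultdict(set)
    for x in universe:
        classes[attrs(x)].add(x)
    lower, upper = set(), set()
    for eq in classes.values():
        if eq <= target:
            lower |= eq
        if eq & target:
            upper |= eq
    return lower, upper

# Toy table: objects 0..5 described by one discretised attribute.
attr = {0: 'a', 1: 'a', 2: 'b', 3: 'b', 4: 'c', 5: 'c'}.get
X = {0, 1, 2}                      # target set
lo, up = approximations(range(6), attr, X)
accuracy = len(lo) / len(up)       # rough-set accuracy measure
```

Here objects 2 and 3 are indiscernible but only 2 belongs to X, so the set is genuinely rough and the accuracy is strictly below 1.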
created rules on a test set to estimate the prediction error of the rough set model. The equation representing the mapping between the inputs and the output using rough sets can be written as

$$y = f(G, N, R) \tag{7}$$

where y is the output, G is the granulization of the input space into high, low, medium etc, N is the n...
evidence. The evidence can be treated as the normalization constant and therefore is ignored in this paper. The likelihood function may be estimated as follows:
$$P(D \mid M) = \frac{1}{z_1}\exp(-\mathit{error}) = \frac{1}{z_1}\exp\{A(N,R,G) - 1\} \tag{9}$$

Here $z_1$ is the normalization cons...
of large samples of granules for the input space, which in many cases is not computationally efficient. MCMC creates a chain of granules and accepts or rejects them using the Metropolis algorithm. The application of the Bayesian approach and MCMC to rough sets results in the probability distribution function of the granules,...
# Comparing Robustness Of Pairwise And Multiclass Neural-Network Systems For Face Recognition
J. Uglov, V. Schetinin, C. Maple Computing and Information System Department, University of Bedfordshire, Luton, UK
Abstract. Noise, corruptions and variations in face images can seriously hurt the performance of face recogni... | |
noise components drawn from a Gaussian density function with zero mean and standard deviation α = 0.5.

<image>

<image>

From this plot we can observe that the noise components corrupt the boundary of the given classes, and therefore the performance of a face recognition system can be affected. From these plot...
<image>
Now we can combine hyperplanes f1/2, f1/3 and f2/3 to build up the new dividing hyperplanes g1, g2, and g3. The first hyperplane g1 combines the functions f1/2 and f1/3 so that g1 = f1/2 + f1/3. These functions are taken with weights of 1.0 because both functions f1/2 and f1/3 give the positive output values... | |
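The combination described above, where each class score $g_i$ sums the pairwise discriminants that are positive for class $i$ (with unit weights, and $f_{j/i} = -f_{i/j}$), can be sketched as follows. The dictionary layout and function name are assumptions for illustration:

```python
def pairwise_to_class_scores(f):
    """Combine pairwise discriminants into per-class scores:
    g_i sums f_{i/j} over all other classes j, using the antisymmetry
    f_{j/i} = -f_{i/j}.  `f[(i, j)]` holds the output of the network
    separating classes i and j (stored for i < j)."""
    classes = {c for pair in f for c in pair}
    g = {}
    for i in classes:
        g[i] = sum(
            f[(i, j)] if (i, j) in f else -f[(j, i)]
            for j in classes if j != i
        )
    return g

# Toy outputs of the three pairwise networks f1/2, f1/3 and f2/3:
f = {(1, 2): 0.8, (1, 3): 0.6, (2, 3): -0.2}
g = pairwise_to_class_scores(f)
winner = max(g, key=g.get)   # predicted class
```

With these toy values, g1 = f1/2 + f1/3 = 1.4 dominates, so class 1 is chosen.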
## 4. Experiments
The goal of our experiments is to compare the robustness of the proposed pairwise and standard multiclass neural-network systems on the Cambridge ORL face image data set [5] (in a full paper, the experiments will run on different face image data sets). To estimate the robustness we add noise compone... | |
## 5. Conclusion
We have proposed a pairwise neural-network system for face recognition in order to reduce the negative effect of noise and corruptions in face images. Within such a classification scheme we expect that the improvement in the performance can be achieved on the base of our observation that boundaries b... | |
# Ensemble Learning For Free With Evolutionary Algorithms?

Christian Gagné*
Informatique WGZ Inc., 819 avenue Monk, Québec (QC), G1S 3M9, Canada.
christian.gagne@wgz.ca
## Abstract
Evolutionary Learning proceeds by evolving a population of classifiers, from which it generally returns (with some notable exceptio... | |
as a pool for recruiting the elements of an ensemble, enabling "Ensemble Learning for Free". Previous work along this line will be described in Section 2, mostly based on using an evolutionary algorithm as weak learner [17], or using evolutionary diversity-enforcing heuristics [16, 18].
In this paper, the "Evolutionar... | |
only or all along evolution.
Two EEL frameworks have been designed to study these interdependent issues. The first one, dubbed Offline Evolutionary Ensemble Learning (Off-EEL), constructs the ensemble from the final population only. The second one, called Online Evolutionary Ensemble Learning (On-EEL), gradually c...
plane is amenable to quadratic optimization, with:

1. Let P1 be the first evolutionary population, and h* the classifier with minimal error rate on E.
2. L1 = Ensemble-Selection(P1, E, {h*})
3. For t = 2 . . . T:
   (a) Evolve Pt−1 → Pt, using Lt−1 as reference set.
   (b) Lt = Ensemble-Selection(Pt, E, L...
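The On-EEL steps above can be sketched as a Python skeleton. Ensemble-Selection here is a hypothetical greedy stand-in for the paper's procedure, and the toy "classifiers" are just numbers whose value plays the role of error rate:

```python
import random

random.seed(1)

def ensemble_selection(population, fitness, seed_ensemble):
    """Hypothetical stand-in for Ensemble-Selection: pool the seed
    ensemble with the population and greedily keep the better half."""
    pool = list(seed_ensemble) + list(population)
    pool.sort(key=fitness)
    return pool[: max(1, len(pool) // 2)]

def on_eel(init_pop, evolve, fitness, T):
    """On-EEL skeleton: rebuild the ensemble L_t from each new
    population P_t, passing L_{t-1} as the reference set."""
    pop = init_pop
    best = min(pop, key=fitness)                  # h* on validation set E
    L = ensemble_selection(pop, fitness, [best])  # step 2
    for _ in range(2, T + 1):                     # step 3
        pop = evolve(pop, L)                      # (a) P_{t-1} -> P_t
        L = ensemble_selection(pop, fitness, L)   # (b)
    return L

# Toy run: evolution nudges each "classifier" slightly downward on average.
evolve = lambda pop, ref: [max(0.0, h + random.uniform(-0.1, 0.05)) for h in pop]
final = on_eel([random.random() for _ in range(10)], evolve, lambda h: h, T=5)
```

The point of the skeleton is the data flow: the ensemble is updated every generation, so selection pressure and ensemble construction interleave instead of happening once at the end as in Off-EEL.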
| Table 1: UCI datasets used for the experimentations. | | | | |
|--------------------------------------------------------|------|----------|---------|--------... | |
| Measure | LMS | GA | Boosting | Off-EEL | On-EEL |
|--------------------|--------------|--------------|----------------|--------------|-----------------|
| bcw | | | | | |
| Train er... | |
stability of the learning process.
The two EEL frameworks investigated in this paper can be considered promising. Off-EEL constructs ensembles with the best performances while needing little modification over a traditional evolutionary algorithm, with a diversity-enhancing fitness and the construction of an ensembl...
# Fault Classification In Cylinders Using Multilayer Perceptrons, Support Vector Machines And Gaussian Mixture Models
Tshilidzi Marwala^a, Unathi Mahola^a and Snehashish Chakraverty^b
^a *School of Electrical and Information Engineering*, University of the Witwatersrand, Private Bag x 3, Wits 2050, South Africa e-mail: t....
Materials & Structures, 10: 540-547, 2001.
[10] S. Haykin. *Neural networks*. Prentice-Hall, Inc, New York, USA, 1995.
[11] G.E. Hinton. Learning translation invariant recognition in massively parallel networks. Proceedings PARLE Conference on Parallel Architectures and Languages, 1-13, 1987.
[12] M. M...
shells [2,3,4]. Fault identification in a population of nominally identical structures is particularly important in areas such as automated manufacturing on the assembly line. Thus far, various forms of neural networks, such as MLPs and Bayesian neural networks, have been successfully ...
However, assuming that we can only access the feature space through dot products, equation 7 is transformed into a dual optimization problem by introducing Lagrange multipliers αi, i = 1, 2, ..., n; using minimisation, maximisation and the saddle-point property of the optimal point [14,15,16], the problem b...
the EM algorithm is used since it has reasonably fast computational time compared to other algorithms. The EM algorithm finds the optimum model parameters by iteratively refining the GMM parameters to increase the likelihood of the estimated model for the given fault feature modal vector. For the EM equations for trai...
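As an illustration of the iterative refinement described here, below is a minimal one-dimensional, two-component GMM fitted with standard EM. This is a generic textbook sketch, not the paper's exact formulation or feature data:

```python
import math, random

random.seed(0)

def em_gmm_1d(data, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture: the E-step computes
    responsibilities, the M-step re-estimates weights, means, variances."""
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-((x - mu[k]) ** 2) / (2 * var[k]))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate parameters from responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return w, mu, var

data = ([random.gauss(0, 1) for _ in range(200)]
        + [random.gauss(5, 1) for _ in range(200)])
w, mu, var = em_gmm_1d(data)
```

Each iteration provably does not decrease the data likelihood, which is the property the paper relies on when refining the fault-feature model.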
$$[M]\{X''\} + [C]\{X'\} + [K]\{X\} = \{F\} \tag{17}$$

where [M], [C] and [K] are the mass, damping and stiffness matrices, respectively, {X}, {X′} and {X′′} are the displacement, velocity and acceleration vectors, respectively, while {F} is the applied force vector. If equation 17 is transformed into the modal domai...
For one cylinder the first type of fault is a zero-fault scenario. This type of fault is given the identity [0 0 0], indicating that there are no faults in any of the three substructures. The second type of fault is a one-fault-scenario, where a hole may be located in any of the three substructures. Three possible one-... | |
by the total number of cases can be misleading. This is the case if the fault cases classified incorrectly are those from the less numerous cases. To remedy this situation a measure of accuracy called geometrical accuracy (GA) is used and defined as:

$$\mathrm{GA} = \sqrt[n]{\,q_1 \cdots q_n\,} \tag{19}$$
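Assuming GA is the geometric mean of the per-class accuracies $q_i$ (the OCR of equation (19) is ambiguous, so this reading is an assumption), a short sketch shows why it resists the imbalance problem described above:

```python
import math

def geometric_accuracy(per_class_correct, per_class_total):
    """Geometric mean of per-class accuracies q_i: unlike overall
    accuracy, a rare class that is badly classified drags GA down."""
    qs = [c / t for c, t in zip(per_class_correct, per_class_total)]
    return math.prod(qs) ** (1 / len(qs))

# 95% on a large class but only 40% on a rare one:
correct, total = [95, 4], [100, 10]
overall = sum(correct) / sum(total)      # 99/110 = 0.9, looks good
ga = geometric_accuracy(correct, total)  # sqrt(0.95 * 0.4), much lower
```

Overall accuracy hides the rare class's failure, while GA exposes it.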
Table 4. The confusion matrix obtained when the GMM network is used for fault classification

Predicted
| | [000] | [100] | [010] | [001] | [110] | [101] | [011] | [111] | |
|--------|---------|---------|---------|---------|---------|---------|---------|---------|----|
| | [0... | |
# Learning To Bluff

Evan Hurwitz and Tshilidzi Marwala

Abstract—The act of bluffing confounds game designers to this day. The very nature of bluffing is even open for debate, adding further complication to the process of creating intelligent virtual players that can bluff, and hence play, realistically. Through the u...
## III. Lerpa MAS
As with any optimisation system, very careful consideration needs to be taken with regards to how the system is structured, since the implications of these decisions can often result in unintentional assumptions made by the system created. With this in mind, the Lerpa Multi-Agent System
(MAS) has b... | |
scheme would in fact allow for repeated cards, and hence 403 options would be available). The first thing to notice is that only a single deck of cards is being used, hence no card can ever be repeated in a hand. Acting on this principle, consistent ordering of the hand means that the base dimensionality of the hand is... | |
$$B = P(O = 2) \tag{3}$$

$$C = P(O = 1) \tag{4}$$

$$D = P(O = -3) \tag{5}$$
The internal structure of the neural network uses a standard sigmoidal activation function [7], which aids stability while still allowing the freedom expected of a neural network. The sigmoidal activations...
## B. Parameter Optimisation
A number of parameters need to be optimised in order to optimise the learning of the agents. These parameters are the learning rate α, the memory parameter λ and the exploration parameter ε. The multi-agent system provides a perfect environment for this testing, since four different para...
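For illustration, here is a hypothetical sketch of where α, λ and ε would enter an ε-greedy, TD(λ)-style update. The excerpt does not give the agents' exact update rule, so this is a generic reinforcement-learning reading of the three parameters, not the paper's implementation:

```python
import random

random.seed(2)

ALPHA, LAMBDA, EPSILON = 0.1, 0.9, 0.05  # learning rate, memory, exploration

def choose_action(values, actions):
    """Epsilon-greedy: with probability EPSILON explore a random action,
    otherwise exploit the currently best-valued action."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: values.get(a, 0.0))

def td_lambda_update(values, trace, state, td_error):
    """TD(lambda): decay all eligibility traces by LAMBDA, bump the
    current state's trace, then move every traced value toward the
    target in proportion to ALPHA and its trace."""
    for s in trace:
        trace[s] *= LAMBDA
    trace[state] = trace.get(state, 0.0) + 1.0
    for s, e in trace.items():
        values[s] = values.get(s, 0.0) + ALPHA * e * td_error
```

Under this reading, α scales each correction, λ controls how far credit propagates back through recently visited states, and ε trades exploration against exploitation.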
# Soft Constraint Abstraction Based On Semiring Homomorphism ∗
Sanjiang Li and Mingsheng Ying †
Department of Computer Science and Technology Tsinghua University, Beijing 100084, China
## Abstract
The semiring-based constraint satisfaction problem (semiring CSP),
proposed by Bistarelli, Montanari and Rossi [3], is...
The next lemma shows that α preserves optimal solutions only if it is a semiring homomorphism.

Lemma 4.3. Let $\alpha$ be a mapping from c-semiring $S$ to c-semiring $\widetilde{S}$ such that $\alpha(0) = \widetilde{0}$, $\alpha(1) = \widetilde{1}$. Suppose $\alpha : S \to \widetilde{S}$ preserves optimal solutions. Then $\alpha$ is a semiring homomorphism.

Proof. By Lemma 4.1, we know α sat...
## 5 Computing Concrete Optimal Solutions From Abstract Ones
In the above section, we investigated conditions under which all optimal solutions of the concrete problem can be related *precisely* to those of the abstract problem. There are often situations where it suffices to find *some* optimal solutions or simply...
Now we show $\alpha(v) \leq_{\widetilde{S}} \widetilde{v}$. By $v <_S \overline{v}$, we have $\alpha(v) \leq_{\widetilde{S}} \alpha(\overline{v})$. Then

$$\widetilde{v} = \sum_{i=1}^{n}\prod_{j=1}^{m}\alpha(v_{ij}) = \alpha\Big(\sum_{i=1}^{n}\prod_{j=1}^{m}v_{ij}\Big) = \alpha(v) \leq_{\widetilde{S}} \alpha(\overline{v})...
Set t = (d2, d2). Clearly, t is an optimal solution of α(P) with value {q} in α(P), and value ∅ in P. Notice that t̄ = (d1, d1) is the unique optimal solution of P. Since $\alpha(\{a\}) = \{p\} \not\subseteq \{q\}$, there is no optimal solution t̂ of P such that α(t̂) ⊆ {q}.
## 6 Related Work
Our abstraction framework is closely ... |