Thursday, April 18, 2024

B-tree

In a B-tree of order m, every node has at most m children, and every internal node other than the root has at least ⌈m/2⌉ children. Unlike a binary search tree, a node can have many children.
Each node therefore holds at most m − 1 search keys, which is the maximum number of search keys for each node in a B-tree.
For example, with m = 4 the maximum is 3 keys per node; when a node overflows, it splits and the middle key moves up, eventually reaching the root.
This splitting is how the tree expands, and it keeps the tree shallow and balanced, which is why your computer uses B-trees to organize huge data efficiently.
Searches walk down from the root and splits propagate back up, so each operation only goes back and forth along a short path.
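A minimal sketch in Python (my own illustration, not from any particular library; the class and function names are made up) of how a search walks down such a node structure:

class BTreeNode:
    def __init__(self, keys=None, children=None):
        self.keys = keys or []          # sorted search keys, at most m - 1 of them
        self.children = children or []  # empty for a leaf, otherwise len(keys) + 1 subtrees

def btree_search(node, key):
    # Find the first key position that is not smaller than the target.
    i = 0
    while i < len(node.keys) and key > node.keys[i]:
        i += 1
    if i < len(node.keys) and node.keys[i] == key:
        return True
    if not node.children:               # reached a leaf without finding the key
        return False
    return btree_search(node.children[i], key)

# A tiny order-4 tree: root [10, 20] with three leaf children.
root = BTreeNode([10, 20], [BTreeNode([3, 7]), BTreeNode([12, 15]), BTreeNode([25])])
print(btree_search(root, 15))   # True
print(btree_search(root, 8))    # False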

Sunday, April 14, 2024

Stochastic block model

The stochastic block model is a family of random graphs which contain communities, i.e. subsets of nodes. There are n vertices partitioned into disjoint subsets C1, …, Cr; each Ci is a community. P is a symmetric matrix of edge probabilities: two vertices from communities Ci and Cj are joined by an edge with probability Pij, independently of the other pairs.
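A minimal sketch in Python (my own illustration; sample_sbm is a made-up name) of drawing one graph from this model with the standard library:

import random

def sample_sbm(sizes, P, seed=0):
    # sizes[i] = |C_i|; P[i][j] = probability of an edge between C_i and C_j.
    random.seed(seed)
    labels = [i for i, size in enumerate(sizes) for _ in range(size)]
    n = len(labels)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < P[labels[u]][labels[v]]:
                edges.append((u, v))
    return labels, edges

# Two communities of 4 vertices each: dense inside (0.8), sparse across (0.05).
labels, edges = sample_sbm([4, 4], [[0.8, 0.05], [0.05, 0.8]])
print(labels)
print(edges)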

G(n,M)



This is the Erdős–Rényi random graph model, which has n nodes and M edges; a graph is chosen uniformly at random from all graphs with n nodes and M edges.

In the example G(3, 2), each of the three possible graphs on 3 vertices and 2 edges is chosen with probability 1/3.



G(n,p)

In G(n, p), each possible edge between the n vertices is included independently with probability p. Once you have more vertices, the probability of obtaining any one particular graph is almost zero.
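A minimal sketch (my own illustration; gnm and gnp are made-up names) of sampling from the two Erdős–Rényi variants described above:

import itertools
import random

def gnm(n, M, seed=0):
    # G(n, M): pick M edges uniformly at random from all possible vertex pairs.
    random.seed(seed)
    return random.sample(list(itertools.combinations(range(n), 2)), M)

def gnp(n, p, seed=0):
    # G(n, p): include each possible edge independently with probability p.
    random.seed(seed)
    return [e for e in itertools.combinations(range(n), 2) if random.random() < p]

print(gnm(3, 2))      # one of the three graphs on 3 vertices with 2 edges
print(gnp(5, 0.5))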

Friday, April 12, 2024

Tschirnhaus transformation

A Tschirnhaus transformation starts from a polynomial equation of degree n (n ≧ 2) with some nonzero intermediate coefficients and transforms it into a simpler polynomial in which some of those coefficients vanish.
-------------------------------------------------------------------------------------------------------

Ex.
n = 3
A Tschirnhaus transformation can make a’1 = 0 and a’2 = 0, reducing the cubic to y³ + c = 0. (A simpler linear substitution, which removes only one coefficient, is worked out below the rule.)

-------------------------------------------------------------------------------------------------------
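As a simpler worked instance (my own illustration, not part of the example above), the linear substitution x = y − a2/3 removes only the quadratic term of a cubic:

\[
x^3 + a_2 x^2 + a_1 x + a_0 = 0, \qquad x = y - a_2/3
\]
\[
\Longrightarrow\quad y^3 + \left(a_1 - a_2^2/3\right)\,y + \left(a_0 - a_1 a_2/3 + 2a_2^3/27\right) = 0 .
\]

Removing a’2 as well, as in the example, needs a quadratic substitution rather than a linear one.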

K is a field, and P(t) is an irreducible polynomial over K, so the quotient below is a field.

L = K[t]/(P(t))



L=K(α)



α is t modulo P.

β=F(α), α=G(β)



F and G are polynomials over K, and β is another primitive element of L. If Q is the minimal polynomial of β over K, then Q is called a Tschirnhaus transformation of P.

L is a Galois extension of K when P is separable and L contains all the roots of P.
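A concrete check (my own example, not from the original text): take K = Q and P(t) = t² − 2, so α = √2. Choosing β = F(α) = α + 1, with α = G(β) = β − 1, the minimal polynomial of β over K is

\[
Q(t) = t^2 - 2t - 1, \qquad Q(\beta) = (1+\sqrt{2})^2 - 2(1+\sqrt{2}) - 1 = 0 ,
\]

so this Q is a Tschirnhaus transformation of P.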

Wednesday, April 10, 2024

Huffman coding

In data with repetitive structure, you can compress long chains of symbols. Huffman coding depends on frequency of occurrence, so the more often a symbol is used, the shorter its code.
Suppose you have BACE, stored with fixed 3-bit codes such as A = 000, B = 001, C = 010, E = 100; then BACE is 001000010100, which is 4 × 3 = 12 bits. Since only four distinct symbols occur, 2-bit codes are enough, for example A = 00, B = 01, C = 10, E = 11, and BACE becomes 01001011. This is 8 bits, about 67% of the original size; when some symbols occur more often than others, Huffman coding shortens their codes even further. MP3 and JPEG are well-known formats that use this kind of coding.
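A minimal sketch in Python (my own illustration; huffman_code is a made-up name) of building such a prefix code with a heap:

import heapq
from collections import Counter

def huffman_code(text):
    # Repeatedly merge the two least frequent groups; frequent symbols end up with short codes.
    freq = Counter(text)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

code = huffman_code("BACE")
print(code)                                # four 2-bit codes, since every symbol is equally frequent
print("".join(code[s] for s in "BACE"))    # an 8-bit encoding of BACE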

The information content h (in bits) of each symbol ai with non-null probability wi is h(ai) = log2(1/wi), measured in binary digits.
The entropy H (in bits) is the weighted sum, across all symbols ai with non-zero probability wi, of the information content of each symbol: H = Σ wi log2(1/wi).
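A tiny check of the entropy formula above (my own sketch):

import math
from collections import Counter

def entropy(text):
    # H = sum over symbols of w_i * log2(1 / w_i), with w_i the relative frequency.
    n = len(text)
    return sum((f / n) * math.log2(n / f) for f in Counter(text).values())

print(entropy("BACE"))   # 2.0 bits per symbol: four equally likely symbols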

Monday, April 8, 2024

Jordan normal form

The Jordan normal form is a special form of a square matrix.
A is a square matrix over an algebraically closed field. (In mathematics, a field F is algebraically closed if every non-constant polynomial in F[x] has a root in F.) There is an invertible matrix P such that P⁻¹AP = J, and J is called the Jordan normal form of A.
Ae1=5e1

Ae2=e1+5e2

Ae3=e2+5e3

You can rewrite this as follows.

(A − 5I)e1 = 0

(A − 5I)e2 = e1

(A − 5I)e3 = e2


Under A − 5I, the chain is e3 → e2 → e1 → 0.

e1 is an eigenvector of A with eigenvalue 5, and e2 and e3 are generalized eigenvectors; together they form a Jordan chain.
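Collecting these relations (my own rendering of the example above), the matrix of A in the basis (e1, e2, e3) is a single 3 × 3 Jordan block with eigenvalue 5, where the columns of P are e1, e2, e3:

\[
J = P^{-1} A P = \begin{pmatrix} 5 & 1 & 0 \\ 0 & 5 & 1 \\ 0 & 0 & 5 \end{pmatrix} .
\]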

This chain behaviour, where each basis vector maps to the previous one and finally to 0, is reminiscent of the boundary maps studied in computational topology.

Sunday, April 7, 2024

1-2-3 Conjecture

The 1-2-3 Conjecture concerns a graph G = (V, E) that is simple, finite, and undirected. A triangle is the simplest example to look at. (The convention ∅ = 0 is used, i.e. an empty sum of weights is 0.)

u,v∈V(G)



u and v are any adjacent vertices. The conjecture states that the edges of G can be assigned weights from the set {1, 2, 3} so that the sum of the weights of edges incident to u differs from the sum of the weights of edges incident to v. These vertex sums then act as a proper vertex colouring, since adjacent vertices receive different values.
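A minimal sketch in Python (my own illustration; the function name is made up) that checks a weighting of the triangle against this condition:

def is_proper_weighting(edges, weight):
    # edges: list of (u, v); weight: dict mapping each edge to a value in {1, 2, 3}.
    sums = {}
    for (u, v) in edges:
        sums[u] = sums.get(u, 0) + weight[(u, v)]
        sums[v] = sums.get(v, 0) + weight[(u, v)]
    # Adjacent vertices must receive different sums ("colours").
    return all(sums[u] != sums[v] for (u, v) in edges)

triangle = [(0, 1), (1, 2), (0, 2)]
weights = {(0, 1): 1, (1, 2): 2, (0, 2): 3}
print(is_proper_weighting(triangle, weights))   # True: the vertex sums are 4, 3, 5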

Thursday, April 4, 2024

Bernoulli Differential Equations

Higher powers such as x^n take you into higher dimensions, and it is hard to picture more than 4 dimensions; at that point it may be closer to religion. I have studied Zen in English, but I often see words like emptiness, nothingness, and absurdity.

Bernoulli differential equations, y′ + P(x)y = Q(x)yⁿ, apply the chain rule (through the substitution u = y^(1−n)) to untangle this complicated nonlinear term.
The result is a linear differential equation, which can then be integrated.
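A short check of this reduction (my own working, using the standard substitution u = y^(1−n)):

\[
y' + P(x)\,y = Q(x)\,y^n, \qquad u = y^{1-n}, \qquad u' = (1-n)\,y^{-n}\,y'
\]
\[
\Longrightarrow\quad u' + (1-n)\,P(x)\,u = (1-n)\,Q(x) ,
\]

which is linear in u and can be integrated with an integrating factor.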

Wednesday, April 3, 2024

Hyperplane separation theorem

Misunderstanding becomes critical when the connection between two sides is empty. In abstract algebra, this kind of separation statement is called Krull's separation lemma.

I∩M=∅



I is an ideal, and M is a multiplicatively closed set.

Krull's lemma says there is a prime ideal P with the two properties below. (For the integers, the prime ideals are exactly the sets of all multiples of a given prime number, together with the zero ideal.)

I⊆P



P∩M=∅
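A concrete instance (my own example, not from the original text): in Z, take I = 4Z and M = {1, 3, 9, 27, …}. Then I ∩ M = ∅, and the prime ideal P = 2Z satisfies I ⊆ P and P ∩ M = ∅.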



The geometric counterpart concerns disjoint convex sets in higher-dimensional Euclidean space. A and B are disjoint nonempty convex subsets of Rⁿ, and there are v and c with:

⟨x, v⟩ ≧ c and ⟨y, v⟩ ≦ c



v is a nonzero vector and c is a real number such that these inequalities hold for every x in A and every y in B. If both sets are closed and at least one of them is compact, the separation can be made strict. This is called the hyperplane separation theorem.
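A minimal sketch in Python (my own illustration with made-up data) that checks the two inequalities on a concrete pair of convex sets; by convexity it is enough to test the vertices:

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

A = [(3.0, 0.0), (4.0, 1.0)]            # vertices of a convex set on the right
B = [(0.0, 0.0), (1.0, 0.5)]            # vertices of a convex set on the left
v, c = (1.0, 0.0), 2.0                  # hyperplane: points whose first coordinate equals 2

print(all(dot(x, v) >= c for x in A))   # True
print(all(dot(y, v) <= c for y in B))   # True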