qlat_utils

Qlattice utility package

Usage:

import qlat_utils as q

These utilities are also loaded, together with the other qlat functions, by import qlat as q.

Message

get_verbose_level()

Return the current verbosity level as integer.

set_verbose_level([level])

Set the current verbosity level to the given integer.

displayln(level, *args)

Print all the arguments and then print a newline.

displayln_info(*args)

Same as displayln, but only prints if get_id_node() == 0.

get_fname()

Return the name of the current function as a string (conventionally assigned to fname).

Timer

timer([func])

Decorator that accumulates timing information for the decorated function.

timer_verbose(func)

Same as timer, but also displays timing information when called.

timer_flops(func)

Same as timer, but also records the floating point operation (flops) count.

timer_verbose_flops(func)

Same as timer_verbose, but also records the flops count.

timer_display([tag])

Display the accumulated timing information.

timer_display_stack()

timer_fork(...)

timer_merge()

get_time()

Return current time in seconds since epoch.

get_start_time()

Return start time in seconds since epoch.
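
The timer decorators above are provided by qlat_utils itself. As a rough illustration of the general pattern only (a pure-Python sketch, not qlat's implementation), a timing decorator that accumulates total run time and call count can look like this:

```python
import functools
import time

def timer(func):
    """Sketch of a timing decorator: accumulate run time and call count."""
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        t0 = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            # record even if func raises
            wrapped.total_time += time.time() - t0
            wrapped.call_count += 1
    wrapped.total_time = 0.0
    wrapped.call_count = 0
    return wrapped

@timer
def work(n):
    return sum(range(n))

work(1000)
work(1000)
```

The real q.timer additionally integrates with timer_display and the verbose level.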

Random number

RngState([x, y])

get_data_sig(x, RngState rs)

Return a signature (a floating point number, real or complex) of data viewed as a 1-D array of numbers.

Algorithm of the random number generator

The state of the generator is effectively composed of the history of the generator encoded as a string.

To generate random numbers, one computes the SHA-256 hash of the string. The hash result is viewed as eight 32-bit unsigned integers.

The 8 32-bit unsigned integers are merged into 4 64-bit unsigned integers. These 4 numbers are treated as the random numbers generated by this random number generator.

Relevant source files: qlat-utils/include/qlat-utils/rng-state.h and qlat-utils/lib/rng-state.cpp
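
The scheme above can be sketched in pure Python with hashlib. The byte order and the pairing of adjacent 32-bit words below are illustrative assumptions; the actual conventions are defined in the C++ sources listed above.

```python
import hashlib
import struct

def rand_u64x4(state: str):
    """Hash the generator state string; return 4 64-bit unsigned integers."""
    digest = hashlib.sha256(state.encode()).digest()  # 32 bytes
    u32s = struct.unpack(">8I", digest)               # 8 x 32-bit unsigned ints
    # merge adjacent pairs of 32-bit words into 64-bit words (assumed order)
    return [(u32s[2 * i] << 32) | u32s[2 * i + 1] for i in range(4)]

nums = rand_u64x4("some-state-string")
```

The output is fully determined by the state string, which is what makes the generator reproducible across runs.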

Coordinate

rel_mod(x, size)

Return x % size or x % size - size, whichever is closer to zero.

rel_mod_sym(x, size)

Return x % size or x % size - size, whichever is closer to zero, or 0 at the midpoint.

rel_mod_arr(x, size)

Return x % size or x % size - size, where x and size are np.ndarray objects of the same shape.

rel_mod_sym_arr(x, size)

Return x % size or x % size - size or 0, where x and size are np.ndarray objects of the same shape.
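
A pure-Python sketch of the assumed semantics (the tie-breaking convention at exactly size/2 is an assumption here; the qlat implementations are authoritative):

```python
import numpy as np

def rel_mod(x, size):
    """Map x into a window around zero of width `size`."""
    m = x % size
    return m if m * 2 < size else m - size

def rel_mod_sym(x, size):
    """Same as rel_mod, but return 0 exactly at the midpoint."""
    m = x % size
    if m * 2 < size:
        return m
    elif m * 2 > size:
        return m - size
    return 0

def rel_mod_arr(x, size):
    """Element-wise rel_mod for arrays of the same shape."""
    x = np.asarray(x)
    size = np.asarray(size)
    m = x % size
    return np.where(m * 2 < size, m, m - size)
```

These helpers are useful for computing signed relative coordinates on a periodic lattice.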

Coordinate

Coordinate.to_list()

Return a list composed of the 4 components of the coordinate.

Coordinate.to_tuple()

Return a tuple composed of the 4 components of the coordinate.

Coordinate.to_numpy()

Return a np.ndarray composed of the 4 components of the coordinate.

Coordinate.from_list(x)

Set the value based on a list composed of the 4 components of the coordinate.

Coordinate.sqr()

Return the square sum of all the components as cc.Long.

Coordinate.r_sqr()

Return the square of the spatial distance (the sum of the squares of the first 3 components) as int.

Coordinate.volume()

Return the product of all 4 components.

Coordinate.spatial_volume()

Return the product of the 3 spatial components.

Coordinate.from_index(index, size)

Coordinate.to_index(size)
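
A sketch of a plausible index convention for from_index/to_index. The assumption here is lexicographic order with the first component varying fastest; the actual convention is fixed by the qlat sources.

```python
def coordinate_from_index(index, size):
    """Decode a linear index into components (first component fastest)."""
    coor = []
    for s in size:
        coor.append(index % s)
        index //= s
    return coor

def coordinate_to_index(coor, size):
    """Inverse of coordinate_from_index."""
    index = 0
    for x, s in zip(reversed(coor), reversed(size)):
        index = index * s + x
    return index
```

Under this convention, index = x + size_x * (y + size_y * (z + size_z * t)).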

CoordinateD

CoordinateD.to_list()

Return a list composed of the 4 components of the coordinate.

CoordinateD.to_tuple()

Return a tuple composed of the 4 components of the coordinate.

CoordinateD.to_numpy()

Return a np.ndarray composed of the 4 components of the coordinate.

CoordinateD.from_list(x)

Set the value based on a list composed of the 4 components of the coordinate.

Cache system

Cache(*keys)

self.cache_keys

mk_cache(*keys[, ca])

Make the cache if it does not exist; otherwise return the existing cache.

clean_cache([ca])

Remove the values of the cache, but keep all the structures.

list_cache([ca])

rm_cache(*keys[, ca])

Remove the cache if it exists.

get_all_caches_info()

clear_all_caches()

Example usage:
cache_x = q.mk_cache("xx")
q.clean_cache(cache_x)
cache_x[key] = value
val = cache_x[key]
key in cache_x
val = cache_x.get(key)
val = cache_x.pop(key, None)
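
The structure-preserving behavior of clean_cache can be sketched with a plain nested-dict cache. This is a hypothetical stand-in for illustration, not the qlat implementation:

```python
class Cache(dict):
    """A cache is a dict; sub-caches are Cache values inside it."""

caches = Cache()  # global root cache

def mk_cache(*keys, ca=None):
    """Walk/create nested sub-caches for keys; return the innermost one."""
    if ca is None:
        ca = caches
    for key in keys:
        if key not in ca:
            ca[key] = Cache()
        ca = ca[key]
    return ca

def clean_cache(ca=None):
    """Remove plain values, but keep the nested Cache structure."""
    if ca is None:
        ca = caches
    for key in list(ca):
        if isinstance(ca[key], Cache):
            clean_cache(ca[key])  # recurse, keep the sub-cache itself
        else:
            del ca[key]           # drop the cached value
```

Keeping the sub-cache objects alive means references obtained earlier via mk_cache remain valid after cleaning.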

Matrix for QCD

WilsonMatrix

SpinMatrix

ColorMatrix

get_gamma_matrix(int mu)

wilson_matrix_g5_herm(WilsonMatrix x)

mat_tr_sm(SpinMatrix v)

mat_tr_cm(ColorMatrix v)

mat_tr_wm(WilsonMatrix v)

mat_tr_wm_wm(WilsonMatrix v1, WilsonMatrix v2)

mat_tr_wm_sm(WilsonMatrix v1, SpinMatrix v2)

mat_tr_sm_wm(SpinMatrix v1, WilsonMatrix v2)

mat_tr_sm_sm(SpinMatrix v1, SpinMatrix v2)

mat_tr_wm_cm(WilsonMatrix v1, ColorMatrix v2)

mat_tr_cm_wm(ColorMatrix v1, WilsonMatrix v2)

mat_tr_cm_cm(ColorMatrix v1, ColorMatrix v2)

mat_mul_wm_wm(WilsonMatrix v1, WilsonMatrix v2)

mat_mul_wm_sm(WilsonMatrix v1, SpinMatrix v2)

mat_mul_sm_wm(SpinMatrix v1, WilsonMatrix v2)

mat_mul_sm_sm(SpinMatrix v1, SpinMatrix v2)

mat_mul_wm_cm(WilsonMatrix v1, ColorMatrix v2)

mat_mul_cm_wm(ColorMatrix v1, WilsonMatrix v2)

mat_mul_cm_cm(ColorMatrix v1, ColorMatrix v2)

as_wilson_matrix(x)

as_wilson_matrix_g5_herm(x)

ElemType

ElemType

ElemTypeInt8t

ElemTypeInt32t

ElemTypeInt64t

ElemTypeChar

ElemTypeInt

ElemTypeLong

ElemTypeRealD

ElemTypeRealF

ElemTypeComplexD

ElemTypeComplexF

ElemTypeSpinMatrix

ElemTypeWilsonMatrix

ElemTypeColorMatrix

ElemTypeIsospinMatrix

ElemTypeNonRelWilsonMatrix

ElemTypeWilsonVector

Data analysis

get_chunk_list(total_list, *[, chunk_size, ...])

Split total_list into chunk_number chunks, or into chunks of size chunk_size. One (and only one) of chunk_size and chunk_number should not be None. Returns a list of chunks: the number of chunks is less than or equal to chunk_number, and the chunk sizes are less than or equal to chunk_size. If rng_state is None, the list is not randomly permuted.
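
A minimal sketch of the chunking logic described above, without the optional random permutation:

```python
def get_chunk_list(total_list, *, chunk_size=None, chunk_number=None):
    """Split total_list into chunks; give exactly one of the two options."""
    assert (chunk_size is None) != (chunk_number is None)
    n = len(total_list)
    if chunk_size is None:
        # ceiling division: number of chunks will be <= chunk_number
        chunk_size = (n + chunk_number - 1) // chunk_number
    return [total_list[i:i + chunk_size] for i in range(0, n, chunk_size)]
```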

check_zero(x)

qnorm(x)

Return the squared norm of x (the sum of |x_i|^2); e.g. qnorm(2) == 4.
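
A sketch of the assumed qnorm semantics (sum of absolute squares over all elements) using numpy:

```python
import numpy as np

def qnorm(x):
    """Squared norm: sum of |x_i|^2 over the flattened data."""
    x = np.asarray(x)
    return float(np.sum(np.abs(x) ** 2))
```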

Spatial distance list

mk_r_sq_list(r_sq_limit[, dimension])

mk_r_list(r_limit, *[, r_all_limit, ...])

Make a list of r values from 0 up to r_limit.

mk_interp_tuple(x, x0, x1, x_idx)

Returns (x_idx_low, x_idx_high, coef_low, coef_high,)
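
Assuming standard linear interpolation between adjacent grid points (and that x_idx_high is simply x_idx + 1), mk_interp_tuple can be sketched as:

```python
def mk_interp_tuple(x, x0, x1, x_idx):
    """Indices and linear-interpolation weights for x between x0 and x1."""
    coef_high = (x - x0) / (x1 - x0)  # fraction of the way from x0 to x1
    coef_low = 1.0 - coef_high
    return (x_idx, x_idx + 1, coef_low, coef_high)
```

The interpolated value is then coef_low * f(x0) + coef_high * f(x1).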

mk_r_sq_interp_idx_coef_list(r_list)

Return a list of tuples:

Interpolation

interp_i_arr(data_x_arr, x_arr)

Return i_arr: the (possibly fractional) index positions of the values x_arr within data_x_arr.

interp(data_arr, i_arr[, axis])

Return approximately data_arr[..., i_arr] when axis=-1, interpolating for fractional indices.

interp_x(data_arr, data_x_arr, x_arr[, axis])

Return interpolated_data_arr. x_arr can be either a 1-D array-like object or a single number.

get_threshold_idx(arr, threshold)

Return x: the (possibly fractional) index at which arr crosses threshold.

get_threshold_i_arr(data_arr, threshold_arr)

Return i_arr.

get_threshold_x_arr(data_arr, data_x_arr, ...)

Return x_arr.

Jackknife method

Jackknife implementation

g_mk_jk(data_list, jk_idx_list, *[, avg])

Perform (randomized) Super-Jackknife for the Jackknife data set.

g_mk_jk_val(rs_tag, val, err, *, jk_type, ...)

Create a jackknife sample with random numbers based on central value val and error err.

g_jk_avg(jk_list, **_kwargs)

Return avg of the jk_list.

g_jk_err(jk_list, *, eps, jk_type, **_kwargs)

Return err of the jk_list.

g_jk_avg_err(jk_list, **kwargs)

Return (avg, err,) of the jk_list.

g_jk_size(**kwargs)

Return number of samples for the (randomized) Super-Jackknife data set.

g_jk_blocking_func(i, jk_idx, *, ...)

Return jk_blocking_func(jk_idx).

default_g_jk_kwargs

A dict holding the default keyword arguments used by the g_jk_* functions (e.g. jk_type, eps, block_size, rng_state). See the example scripts below.

get_jk_state(*, jk_type, eps, n_rand_sample, ...)

Currently only useful if we set

set_jk_state(state)

average(data_list)

avg_err(data_list, *[, eps, block_size])

Compute (avg, err) of data_list.

err_sum(*vs)

e.g.: q.err_sum(1.4, 2.1, 1.0) ==> 2.7147743920996454
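
The example value corresponds to combining independent errors in quadrature, which can be sketched as:

```python
import math

def err_sum(*vs):
    """Combine independent errors in quadrature: sqrt(sum of squares)."""
    return math.sqrt(sum(v * v for v in vs))
```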

block_data(data_list, block_size[, ...])

Return the list of block averages. The blocks may overlap if is_overlapping == True.
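
A sketch of non-overlapping block averaging (leftover samples that do not fill a block are simply dropped in this sketch; the real function also supports overlapping blocks):

```python
import numpy as np

def block_data(data_list, block_size):
    """Average consecutive non-overlapping blocks of block_size samples."""
    v = np.asarray(data_list, dtype=np.float64)
    n_blocks = len(v) // block_size
    return [v[i * block_size:(i + 1) * block_size].mean(axis=0)
            for i in range(n_blocks)]
```

Blocking reduces autocorrelation between samples before jackknife resampling.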

fsqr(data)

Separately square real and imag part in case of complex types.

fsqrt(data)

Separately calculate the square root of the real and imaginary parts in case of complex types.
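
A numpy sketch of the assumed component-wise behavior of fsqr and fsqrt for complex data:

```python
import numpy as np

def fsqr(data):
    """Square real and imaginary parts separately for complex input."""
    data = np.asarray(data)
    if np.iscomplexobj(data):
        return np.square(data.real) + 1j * np.square(data.imag)
    return np.square(data)

def fsqrt(data):
    """Square root of real and imaginary parts separately for complex input."""
    data = np.asarray(data)
    if np.iscomplexobj(data):
        return np.sqrt(data.real) + 1j * np.sqrt(data.imag)
    return np.sqrt(data)
```

With these conventions, fsqrt(fsqr(x)) recovers x for data with non-negative real and imaginary parts.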

jackknife(data_list, *[, eps])

Return jk[i] = avg - (eps / N) * (v[i] - avg). The normal jackknife uses eps=1; eps scales the size of the fluctuations.
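
The formula above can be sketched directly with numpy:

```python
import numpy as np

def jackknife(data_list, *, eps=1.0):
    """jk[i] = avg - (eps / N) * (v[i] - avg), vectorized over samples."""
    v = np.asarray(data_list, dtype=np.float64)
    n = len(v)
    avg = v.mean(axis=0)
    return avg - (eps / n) * (v - avg)
```

Note that the mean of the jackknife samples equals the original average.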

jk_avg(jk_list)

jk_err(jk_list, *[, eps, block_size])

Return the jackknife error estimate of the jk_list.

jk_avg_err(jk_list, *[, eps, block_size])

sjackknife(data_list, jk_idx_list, *[, avg, ...])

Super jackknife.

sjk_avg(jk_list)

sjk_err(jk_list, *[, eps])

Return the super-jackknife error estimate of the jk_list.

sjk_avg_err(jk_list, *[, eps])

rjackknife(data_list, jk_idx_list, *[, avg, ...])

Jackknife-bootstrap hybrid resampling. Return jk_arr, with len(jk_arr) == 1 + n_rand_sample; the distribution of jk_arr should be similar to the distribution of avg.

r_{i,j} ~ N(0, 1)

if is_normalizing_rand_sample:
    n_j = sum_i r_{i,j}^2
    r_{i,j} <- sqrt(n_rand_sample / n_j) * r_{i,j}

data_list_real = [ d for d in data_list if d is not None ]
data_arr = np.array(data_list_real, dtype=dtype)
avg = average(data_arr)
n = len(data_list_real)

jk_arr[0] = avg
jk_arr[i] = avg + sum_{j=1}^{n} (-eps / sqrt(n (n - b(i,j)))) * r_{i,j} * (data_list_real[j] - avg)

where b(i,j) represents the block_size. If jk_blocking_func is provided, jk_blocking_func(i, jk_idx) => blocked jk_idx, and

jk_list[i] = avg + sum_{j=1}^{n} r_{i, jk_blocking_func(j)} * (jk_list[j] - avg)

rjk_mk_jk_val(rs_tag, val, err, *[, ...])

Return jk_arr, where n = n_rand_sample and len(jk_arr) == 1 + n. jk_arr[i] = val + err * r[i] for i in 1..n, where r[i] ~ N(0, 1).
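
A sketch of this construction. The assumption that jk_arr[0] holds the central value val is mine, and rng is a numpy Generator here rather than qlat's RngState:

```python
import numpy as np

def rjk_mk_jk_val(val, err, *, n_rand_sample, rng):
    """Synthesize jackknife-like samples from a central value and error."""
    r = rng.standard_normal(n_rand_sample)  # r[i] ~ N(0, 1)
    jk_arr = np.empty(1 + n_rand_sample)
    jk_arr[0] = val              # assumed: slot 0 holds the central value
    jk_arr[1:] = val + err * r   # fluctuations scaled by the target error
    return jk_arr
```

Such synthetic samples let an externally quoted value with an error bar be combined with genuine resampled data.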

rjk_avg(jk_list)

rjk_err(jk_list[, eps])

Return the error estimate of the jackknife-bootstrap hybrid jk_list.

rjk_avg_err(rjk_list[, eps])

Example for the Jackknife-bootstrap hybrid method (described in the Jackknife method section): examples-py/jackknife-random.py

#!/usr/bin/env python3

import qlat as q
import numpy as np

q.begin_with_mpi()

q.default_g_jk_kwargs["jk_type"] = "rjk"
q.default_g_jk_kwargs["eps"] = 1
q.default_g_jk_kwargs["n_rand_sample"] = 1024
q.default_g_jk_kwargs["is_normalizing_rand_sample"] = False
q.default_g_jk_kwargs["is_apply_rand_sample_jk_idx_blocking_shift"] = True
q.default_g_jk_kwargs["block_size"] = 1
q.default_g_jk_kwargs["block_size_dict"] = {
    "job_tag_1": 1,
    "job_tag_2": 4,
}
q.default_g_jk_kwargs["rng_state"] = q.RngState("rejk")
q.default_g_jk_kwargs["all_jk_idx_set"] = set()

rs = q.RngState("seed1")
job_tag = "job_tag_1"
traj_list = list(range(20))

data_arr = rs.g_rand_arr((len(traj_list), 5,))  # can be list or np.array
jk_arr_1 = q.g_mk_jk(data_arr, [(job_tag, traj) for traj in traj_list])
avg, err = q.g_jk_avg_err(jk_arr_1)

for i in range(len(avg)):
    q.json_results_append(f"avg[{i}]", avg[i])
    q.json_results_append(f"err[{i}]", err[i])

rs = q.RngState("seed2")
job_tag = "job_tag_2"
traj_list = list(range(30))

data_arr = rs.g_rand_arr((len(traj_list), 5,))  # can be list or np.array
jk_arr_2 = q.g_mk_jk(data_arr, [(job_tag, traj) for traj in traj_list])
avg, err = q.g_jk_avg_err(jk_arr_2)

for i in range(len(avg)):
    q.json_results_append(f"avg[{i}]", avg[i])
    q.json_results_append(f"err[{i}]", err[i])

jk_arr = jk_arr_1 + jk_arr_2
avg, err = q.g_jk_avg_err(jk_arr)

for i in range(len(avg)):
    q.json_results_append(f"avg[{i}]", avg[i])
    q.json_results_append(f"err[{i}]", err[i])

jk_val_arr = q.g_mk_jk_val("val-tag", 1.0, 0.5)
avg, err = q.g_jk_avg_err(jk_val_arr)

q.json_results_append("avg", avg)
q.json_results_append("err", err)

jk_diff_arr = jk_arr - jk_val_arr[:, None]
avg, err = q.g_jk_avg_err(jk_diff_arr)

for i in range(len(avg)):
    q.json_results_append(f"avg[{i}]", avg[i])
    q.json_results_append(f"err[{i}]", err[i])

q.check_log_json(__file__, check_eps=1e-10)
q.end_with_mpi()
q.displayln_info("CHECK: finished successfully.")

Example for the conventional Super-Jackknife method: examples-py/jackknife-super.py

#!/usr/bin/env python3

import qlat as q
import numpy as np
import functools

q.begin_with_mpi()

job_tag_list = ['job_tag_1', 'job_tag_2', ]

@functools.lru_cache
def get_traj_list(job_tag):
    fname = q.get_fname()
    if job_tag == "job_tag_1":
        return list(range(20))
    elif job_tag == "job_tag_2":
        return list(range(30))
    else:
        raise Exception(f"{fname}: job_tag='{job_tag}'")

@functools.lru_cache
def get_all_jk_idx():
    jk_idx_list = ['avg', ]
    for job_tag in job_tag_list:
        traj_list = get_traj_list(job_tag)
        for traj in traj_list:
            jk_idx_list.append((job_tag, traj,))
    return jk_idx_list


q.default_g_jk_kwargs["jk_type"] = "super"
q.default_g_jk_kwargs["eps"] = 1
q.default_g_jk_kwargs["is_hash_jk_idx"] = True
q.default_g_jk_kwargs["jk_idx_hash_size"] = 1024
q.default_g_jk_kwargs["block_size"] = 1
q.default_g_jk_kwargs["block_size_dict"] = {
    "job_tag_1": 1,
    "job_tag_2": 4,
}
q.default_g_jk_kwargs["rng_state"] = q.RngState("rejk")
q.default_g_jk_kwargs["all_jk_idx"] = None
q.default_g_jk_kwargs["get_all_jk_idx"] = get_all_jk_idx
q.default_g_jk_kwargs["all_jk_idx_set"] = set()

rs = q.RngState("seed1")
job_tag = "job_tag_1"
traj_list = get_traj_list(job_tag)

data_arr = rs.g_rand_arr((len(traj_list), 5,))  # can be list or np.array
jk_arr_1 = q.g_mk_jk(data_arr, [(job_tag, traj) for traj in traj_list])
avg, err = q.g_jk_avg_err(jk_arr_1)

for i in range(len(avg)):
    q.json_results_append(f"avg[{i}]", avg[i])
    q.json_results_append(f"err[{i}]", err[i])

rs = q.RngState("seed2")
job_tag = "job_tag_2"
traj_list = get_traj_list(job_tag)

data_arr = rs.g_rand_arr((len(traj_list), 5,))  # can be list or np.array
jk_arr_2 = q.g_mk_jk(data_arr, [(job_tag, traj) for traj in traj_list])
avg, err = q.g_jk_avg_err(jk_arr_2)

for i in range(len(avg)):
    q.json_results_append(f"avg[{i}]", avg[i])
    q.json_results_append(f"err[{i}]", err[i])

jk_arr = jk_arr_1 + jk_arr_2
avg, err = q.g_jk_avg_err(jk_arr)

for i in range(len(avg)):
    q.json_results_append(f"avg[{i}]", avg[i])
    q.json_results_append(f"err[{i}]", err[i])

jk_val_arr = q.g_mk_jk_val("val-tag", 1.0, 0.5)
avg, err = q.g_jk_avg_err(jk_val_arr)

q.json_results_append("avg", avg)
q.json_results_append("err", err)

jk_diff_arr = jk_arr - jk_val_arr[:, None]
avg, err = q.g_jk_avg_err(jk_diff_arr)

for i in range(len(avg)):
    q.json_results_append(f"avg[{i}]", avg[i])
    q.json_results_append(f"err[{i}]", err[i])

q.check_log_json(__file__, check_eps=1e-10)
q.end_with_mpi()
q.displayln_info("CHECK: finished successfully.")

Example for a variant of the conventional Super-Jackknife method: examples-py/jackknife-super-hash.py

#!/usr/bin/env python3

import qlat as q
import numpy as np

q.begin_with_mpi()

q.default_g_jk_kwargs["jk_type"] = "super"
q.default_g_jk_kwargs["eps"] = 1
q.default_g_jk_kwargs["is_hash_jk_idx"] = True
q.default_g_jk_kwargs["jk_idx_hash_size"] = 1024
q.default_g_jk_kwargs["block_size"] = 1
q.default_g_jk_kwargs["block_size_dict"] = {
    "job_tag_1": 1,
    "job_tag_2": 4,
}
q.default_g_jk_kwargs["rng_state"] = q.RngState("rejk")
q.default_g_jk_kwargs["all_jk_idx"] = None
q.default_g_jk_kwargs["get_all_jk_idx"] = None
q.default_g_jk_kwargs["all_jk_idx_set"] = set()

rs = q.RngState("seed1")
job_tag = "job_tag_1"
traj_list = list(range(20))

data_arr = rs.g_rand_arr((len(traj_list), 5,))  # can be list or np.array
jk_arr_1 = q.g_mk_jk(data_arr, [(job_tag, traj) for traj in traj_list])
avg, err = q.g_jk_avg_err(jk_arr_1)

for i in range(len(avg)):
    q.json_results_append(f"avg[{i}]", avg[i])
    q.json_results_append(f"err[{i}]", err[i])

rs = q.RngState("seed2")
job_tag = "job_tag_2"
traj_list = list(range(30))

data_arr = rs.g_rand_arr((len(traj_list), 5,))  # can be list or np.array
jk_arr_2 = q.g_mk_jk(data_arr, [(job_tag, traj) for traj in traj_list])
avg, err = q.g_jk_avg_err(jk_arr_2)

for i in range(len(avg)):
    q.json_results_append(f"avg[{i}]", avg[i])
    q.json_results_append(f"err[{i}]", err[i])

jk_arr = jk_arr_1 + jk_arr_2
avg, err = q.g_jk_avg_err(jk_arr)

for i in range(len(avg)):
    q.json_results_append(f"avg[{i}]", avg[i])
    q.json_results_append(f"err[{i}]", err[i])

jk_val_arr = q.g_mk_jk_val("val-tag", 1.0, 0.5)
avg, err = q.g_jk_avg_err(jk_val_arr)

q.json_results_append("avg", avg)
q.json_results_append("err", err)

jk_diff_arr = jk_arr - jk_val_arr[:, None]
avg, err = q.g_jk_avg_err(jk_diff_arr)

for i in range(len(avg)):
    q.json_results_append(f"avg[{i}]", avg[i])
    q.json_results_append(f"err[{i}]", err[i])

q.check_log_json(__file__, check_eps=1e-10)
q.end_with_mpi()
q.displayln_info("CHECK: finished successfully.")

Plotting

plot_save([fn, dts, cmds, lines, ...])

fn is the full name of the plot, or None.
dts is dict_datatable, e.g. { "table.txt": [ [ 0, 1, ], [ 1, 2, ], ], }.
cmds is plot_cmds, e.g. [ "set key rm", "set size 1.0, 1.0", ].
lines is plot_lines, e.g. [ "plot", "x", ].

plot_view([fn, dts, cmds, lines, ...])

Example code to make a plot: examples-py/qplot.py