pytorch_pfn_extras.nn.modules.lazy_linear.UninitializedParameter

class pytorch_pfn_extras.nn.modules.lazy_linear.UninitializedParameter(data=None, requires_grad=True)

Bases: Parameter
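
An UninitializedParameter is the placeholder used by the lazy modules in pytorch_pfn_extras: a lazy module holds one in place of each parameter whose shape depends on the input, then materializes it on the first forward pass. A minimal usage sketch (the layer sizes and input shape below are illustrative only):

>>> import torch
>>> import pytorch_pfn_extras.nn as ppe_nn
>>> from pytorch_pfn_extras.nn.modules.lazy_linear import UninitializedParameter
>>> layer = ppe_nn.LazyLinear(None, 4)  # in_features deferred until forward
>>> isinstance(layer.weight, UninitializedParameter)
True
>>> x = torch.randn(2, 8)
>>> y = layer(x)  # in_features inferred as 8; the weight is materialized here
>>> layer.weight.shape
torch.Size([4, 8])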

Methods

__init__()

abs()

See torch.abs()

abs_()

In-place version of abs()

absolute()

Alias for abs()

absolute_()

In-place version of absolute(); alias for abs_()

acos()

See torch.acos()

acos_()

In-place version of acos()

acosh()

See torch.acosh()

acosh_()

In-place version of acosh()

add(other, *[, alpha])

Add a scalar or tensor to self tensor.

add_(other, *[, alpha])

In-place version of add()

addbmm(batch1, batch2, *[, beta, alpha])

See torch.addbmm()

addbmm_(batch1, batch2, *[, beta, alpha])

In-place version of addbmm()

addcdiv(tensor1, tensor2, *[, value])

See torch.addcdiv()

addcdiv_(tensor1, tensor2, *[, value])

In-place version of addcdiv()

addcmul(tensor1, tensor2, *[, value])

See torch.addcmul()

addcmul_(tensor1, tensor2, *[, value])

In-place version of addcmul()

addmm(mat1, mat2, *[, beta, alpha])

See torch.addmm()

addmm_(mat1, mat2, *[, beta, alpha])

In-place version of addmm()

addmv(mat, vec, *[, beta, alpha])

See torch.addmv()

addmv_(mat, vec, *[, beta, alpha])

In-place version of addmv()

addr(vec1, vec2, *[, beta, alpha])

See torch.addr()

addr_(vec1, vec2, *[, beta, alpha])

In-place version of addr()

adjoint()

Alias for mH

align_as(other)

Permutes the dimensions of the self tensor to match the dimension order in the other tensor, adding size-one dims for any new names.

align_to(*names)

Permutes the dimensions of the self tensor to match the order specified in names, adding size-one dims for any new names.

all([dim, keepdim])

See torch.all()

allclose(other[, rtol, atol, equal_nan])

See torch.allclose()

amax([dim, keepdim])

See torch.amax()

amin([dim, keepdim])

See torch.amin()

aminmax(*[, dim, keepdim])

See torch.aminmax()

angle()

See torch.angle()

any([dim, keepdim])

See torch.any()

apply_(callable)

Applies the function callable to each element in the tensor, replacing each element with the value returned by callable.

arccos()

See torch.arccos()

arccos_()

In-place version of arccos()

arccosh()

Alias for acosh()

arccosh_()

Alias for acosh_()

arcsin()

See torch.arcsin()

arcsin_()

In-place version of arcsin()

arcsinh()

See torch.arcsinh()

arcsinh_()

In-place version of arcsinh()

arctan()

See torch.arctan()

arctan2(other)

See torch.arctan2()

arctan2_(other)

Alias for atan2_()

arctan_()

In-place version of arctan()

arctanh()

See torch.arctanh()

arctanh_()

In-place version of arctanh()

argmax([dim, keepdim])

See torch.argmax()

argmin([dim, keepdim])

See torch.argmin()

argsort([dim, descending])

See torch.argsort()

argwhere()

See torch.argwhere()

as_strided(size, stride[, storage_offset])

See torch.as_strided()

as_strided_(size, stride[, storage_offset])

In-place version of as_strided()

as_strided_scatter(src, size, stride[, ...])

See torch.as_strided_scatter()

as_subclass(cls)

Makes a cls instance with the same data pointer as self.

asin()

See torch.asin()

asin_()

In-place version of asin()

asinh()

See torch.asinh()

asinh_()

In-place version of asinh()

atan()

See torch.atan()

atan2(other)

See torch.atan2()

atan2_(other)

In-place version of atan2()

atan_()

In-place version of atan()

atanh()

See torch.atanh()

atanh_()

In-place version of atanh()

backward([gradient, retain_graph, ...])

Computes the gradient of current tensor wrt graph leaves.

baddbmm(batch1, batch2, *[, beta, alpha])

See torch.baddbmm()

baddbmm_(batch1, batch2, *[, beta, alpha])

In-place version of baddbmm()

bernoulli(*[, generator])

Returns a result tensor where each result[i] is independently sampled from Bernoulli(self[i]).

bernoulli_([p, generator])

Fills each location of self with an independent sample from Bernoulli(p).

bfloat16([memory_format])

self.bfloat16() is equivalent to self.to(torch.bfloat16).

bincount([weights, minlength])

See torch.bincount()

bitwise_and()

See torch.bitwise_and()

bitwise_and_()

In-place version of bitwise_and()

bitwise_left_shift(other)

See torch.bitwise_left_shift()

bitwise_left_shift_(other)

In-place version of bitwise_left_shift()

bitwise_not()

See torch.bitwise_not()

bitwise_not_()

In-place version of bitwise_not()

bitwise_or()

See torch.bitwise_or()

bitwise_or_()

In-place version of bitwise_or()

bitwise_right_shift(other)

See torch.bitwise_right_shift()

bitwise_right_shift_(other)

In-place version of bitwise_right_shift()

bitwise_xor()

See torch.bitwise_xor()

bitwise_xor_()

In-place version of bitwise_xor()

bmm(batch2)

See torch.bmm()

bool([memory_format])

self.bool() is equivalent to self.to(torch.bool).

broadcast_to(shape)

See torch.broadcast_to().

byte([memory_format])

self.byte() is equivalent to self.to(torch.uint8).

cauchy_([median, sigma, generator])

Fills the tensor with numbers drawn from the Cauchy distribution.

ccol_indices

cdouble([memory_format])

self.cdouble() is equivalent to self.to(torch.complex128).

ceil()

See torch.ceil()

ceil_()

In-place version of ceil()

cfloat([memory_format])

self.cfloat() is equivalent to self.to(torch.complex64).

chalf([memory_format])

self.chalf() is equivalent to self.to(torch.complex32).

char([memory_format])

self.char() is equivalent to self.to(torch.int8).

cholesky([upper])

See torch.cholesky()

cholesky_inverse([upper])

See torch.cholesky_inverse()

cholesky_solve(input2[, upper])

See torch.cholesky_solve()

chunk(chunks[, dim])

See torch.chunk()

clamp([min, max])

See torch.clamp()

clamp_([min, max])

In-place version of clamp()

clamp_max

clamp_max_

clamp_min

clamp_min_

clip([min, max])

Alias for clamp().

clip_([min, max])

Alias for clamp_().

clone(*[, memory_format])

See torch.clone()

coalesce()

Returns a coalesced copy of self if self is an uncoalesced tensor.

col_indices()

Returns the tensor containing the column indices of the self tensor when self is a sparse CSR tensor of layout sparse_csr.

conj()

See torch.conj()

conj_physical()

See torch.conj_physical()

conj_physical_()

In-place version of conj_physical()

contiguous([memory_format])

Returns a contiguous in memory tensor containing the same data as self tensor.

copy_(src[, non_blocking])

Copies the elements from src into self tensor and returns self.

copysign(other)

See torch.copysign()

copysign_(other)

In-place version of copysign()

corrcoef()

See torch.corrcoef()

cos()

See torch.cos()

cos_()

In-place version of cos()

cosh()

See torch.cosh()

cosh_()

In-place version of cosh()

count_nonzero([dim])

See torch.count_nonzero()

cov(*[, correction, fweights, aweights])

See torch.cov()

cpu([memory_format])

Returns a copy of this object in CPU memory.

cross(other[, dim])

See torch.cross()

crow_indices()

Returns the tensor containing the compressed row indices of the self tensor when self is a sparse CSR tensor of layout sparse_csr.

cuda([device, non_blocking, memory_format])

Returns a copy of this object in CUDA memory.

cummax(dim)

See torch.cummax()

cummin(dim)

See torch.cummin()

cumprod(dim[, dtype])

See torch.cumprod()

cumprod_(dim[, dtype])

In-place version of cumprod()

cumsum(dim[, dtype])

See torch.cumsum()

cumsum_(dim[, dtype])

In-place version of cumsum()

data_ptr()

Returns the address of the first element of self tensor.

deg2rad()

See torch.deg2rad()

deg2rad_()

In-place version of deg2rad()

dense_dim()

Return the number of dense dimensions in a sparse tensor self.

dequantize()

Given a quantized Tensor, dequantize it and return the dequantized float Tensor.

det()

See torch.det()

detach

Returns a new Tensor, detached from the current graph.

detach_

Detaches the Tensor from the graph that created it, making it a leaf.

diag([diagonal])

See torch.diag()

diag_embed([offset, dim1, dim2])

See torch.diag_embed()

diagflat([offset])

See torch.diagflat()

diagonal([offset, dim1, dim2])

See torch.diagonal()

diagonal_scatter(src[, offset, dim1, dim2])

See torch.diagonal_scatter()

diff([n, dim, prepend, append])

See torch.diff()

digamma()

See torch.digamma()

digamma_()

In-place version of digamma()

dim()

Returns the number of dimensions of self tensor.

dim_order()

Returns a tuple of int describing the dim order or physical layout of self.

dist(other[, p])

See torch.dist()

div(value, *[, rounding_mode])

See torch.div()

div_(value, *[, rounding_mode])

In-place version of div()

divide(value, *[, rounding_mode])

See torch.divide()

divide_(value, *[, rounding_mode])

In-place version of divide()

dot(other)

See torch.dot()

double([memory_format])

self.double() is equivalent to self.to(torch.float64).

dsplit(split_size_or_sections)

See torch.dsplit()

eig([eigenvectors])

element_size()

Returns the size in bytes of an individual element.

eq(other)

See torch.eq()

eq_(other)

In-place version of eq()

equal(other)

See torch.equal()

erf()

See torch.erf()

erf_()

In-place version of erf()

erfc()

See torch.erfc()

erfc_()

In-place version of erfc()

erfinv()

See torch.erfinv()

erfinv_()

In-place version of erfinv()

exp()

See torch.exp()

exp2()

See torch.exp2()

exp2_()

In-place version of exp2()

exp_()

In-place version of exp()

expand(*sizes)

Returns a new view of the self tensor with singleton dimensions expanded to a larger size.

expand_as(other)

Expand this tensor to the same size as other.

expm1()

See torch.expm1()

expm1_()

In-place version of expm1()

exponential_([lambd, generator])

Fills self tensor with elements drawn from the PDF (probability density function) of the exponential distribution.

fill_(value)

Fills self tensor with the specified value.

fill_diagonal_(fill_value[, wrap])

Fill the main diagonal of a tensor that has at least 2-dimensions.

fix()

See torch.fix().

fix_()

In-place version of fix()

flatten([start_dim, end_dim])

See torch.flatten()

flip(dims)

See torch.flip()

fliplr()

See torch.fliplr()

flipud()

See torch.flipud()

float([memory_format])

self.float() is equivalent to self.to(torch.float32).

float_power(exponent)

See torch.float_power()

float_power_(exponent)

In-place version of float_power()

floor()

See torch.floor()

floor_()

In-place version of floor()

floor_divide(value)

See torch.floor_divide()

floor_divide_(value)

In-place version of floor_divide()

fmax(other)

See torch.fmax()

fmin(other)

See torch.fmin()

fmod(divisor)

See torch.fmod()

fmod_(divisor)

In-place version of fmod()

frac()

See torch.frac()

frac_()

In-place version of frac()

frexp(input)

See torch.frexp()

gather(dim, index)

See torch.gather()

gcd(other)

See torch.gcd()

gcd_(other)

In-place version of gcd()

ge(other)

See torch.ge().

ge_(other)

In-place version of ge().

geometric_(p, *[, generator])

Fills self tensor with elements drawn from the geometric distribution.

geqrf()

See torch.geqrf()

ger(vec2)

See torch.ger()

get_device()

For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides.

greater(other)

See torch.greater().

greater_(other)

In-place version of greater().

greater_equal(other)

See torch.greater_equal().

greater_equal_(other)

In-place version of greater_equal().

gt(other)

See torch.gt().

gt_(other)

In-place version of gt().

half([memory_format])

self.half() is equivalent to self.to(torch.float16).

hardshrink([lambd])

See torch.nn.functional.hardshrink()

has_names

Is True if any of this tensor's dimensions are named.

heaviside(values)

See torch.heaviside()

heaviside_(values)

In-place version of heaviside()

histc([bins, min, max])

See torch.histc()

histogram(input, bins, *[, range, weight, ...])

See torch.histogram()

hsplit(split_size_or_sections)

See torch.hsplit()

hypot(other)

See torch.hypot()

hypot_(other)

In-place version of hypot()

i0()

See torch.i0()

i0_()

In-place version of i0()

igamma(other)

See torch.igamma()

igamma_(other)

In-place version of igamma()

igammac(other)

See torch.igammac()

igammac_(other)

In-place version of igammac()

index_add(dim, index, source, *[, alpha])

Out-of-place version of torch.Tensor.index_add_().

index_add_(dim, index, source, *[, alpha])

Accumulate the elements of alpha times source into the self tensor by adding to the indices in the order given in index.

index_copy(dim, index, tensor2)

Out-of-place version of torch.Tensor.index_copy_().

index_copy_(dim, index, tensor)

Copies the elements of tensor into the self tensor by selecting the indices in the order given in index.

index_fill(dim, index, value)

Out-of-place version of torch.Tensor.index_fill_().

index_fill_(dim, index, value)

Fills the elements of the self tensor with value value by selecting the indices in the order given in index.

index_put(indices, values[, accumulate])

Out-of-place version of index_put_().

index_put_(indices, values[, accumulate])

Puts values from the tensor values into the tensor self using the indices specified in indices (which is a tuple of Tensors).

index_reduce

index_reduce_(dim, index, source, reduce, *)

Accumulate the elements of source into the self tensor by accumulating to the indices in the order given in index using the reduction given by the reduce argument.

index_select(dim, index)

See torch.index_select()

indices()

Return the indices tensor of a sparse COO tensor.

inner(other)

See torch.inner().

int([memory_format])

self.int() is equivalent to self.to(torch.int32).

int_repr()

Given a quantized Tensor, self.int_repr() returns a CPU Tensor with uint8_t as data type that stores the underlying uint8_t values of the given Tensor.

inverse()

See torch.inverse()

ipu([device, non_blocking, memory_format])

Returns a copy of this object in IPU memory.

is_coalesced()

Returns True if self is a sparse COO tensor that is coalesced, False otherwise.

is_complex()

Returns True if the data type of self is a complex data type.

is_conj()

Returns True if the conjugate bit of self is set to true.

is_contiguous([memory_format])

Returns True if self tensor is contiguous in memory in the order specified by memory format.

is_distributed

is_floating_point()

Returns True if the data type of self is a floating point data type.

is_inference()

See torch.is_inference()

is_neg()

Returns True if the negative bit of self is set to true.

is_nonzero

is_pinned

Returns true if this tensor resides in pinned memory.

is_same_size

is_set_to(tensor)

Returns True if both tensors are pointing to the exact same memory (same storage, offset, size and stride).

is_shared()

Checks if tensor is in shared memory.

is_signed()

Returns True if the data type of self is a signed data type.

isclose(other[, rtol, atol, equal_nan])

See torch.isclose()

isfinite()

See torch.isfinite()

isinf()

See torch.isinf()

isnan()

See torch.isnan()

isneginf()

See torch.isneginf()

isposinf()

See torch.isposinf()

isreal()

See torch.isreal()

istft(n_fft[, hop_length, win_length, ...])

See torch.istft()

item()

Returns the value of this tensor as a standard Python number.

kron(other)

See torch.kron()

kthvalue(k[, dim, keepdim])

See torch.kthvalue()

lcm(other)

See torch.lcm()

lcm_(other)

In-place version of lcm()

ldexp(other)

See torch.ldexp()

ldexp_(other)

In-place version of ldexp()

le(other)

See torch.le().

le_(other)

In-place version of le().

lerp(end, weight)

See torch.lerp()

lerp_(end, weight)

In-place version of lerp()

less(other)

See torch.less().

less_(other)

In-place version of less().

less_equal(other)

See torch.less_equal().

less_equal_(other)

In-place version of less_equal().

lgamma()

See torch.lgamma()

lgamma_()

In-place version of lgamma()

log()

See torch.log()

log10()

See torch.log10()

log10_()

In-place version of log10()

log1p()

See torch.log1p()

log1p_()

In-place version of log1p()

log2()

See torch.log2()

log2_()

In-place version of log2()

log_()

In-place version of log()

log_normal_([mean, std, generator])

Fills self tensor with numbers sampled from the log-normal distribution parameterized by the given mean μ and standard deviation σ.

log_softmax

logaddexp(other)

See torch.logaddexp()

logaddexp2(other)

See torch.logaddexp2()

logcumsumexp(dim)

See torch.logcumsumexp()

logdet()

See torch.logdet()

logical_and()

See torch.logical_and()

logical_and_()

In-place version of logical_and()

logical_not()

See torch.logical_not()

logical_not_()

In-place version of logical_not()

logical_or()

See torch.logical_or()

logical_or_()

In-place version of logical_or()

logical_xor()

See torch.logical_xor()

logical_xor_()

In-place version of logical_xor()

logit()

See torch.logit()

logit_()

In-place version of logit()

logsumexp(dim[, keepdim])

See torch.logsumexp()

long([memory_format])

self.long() is equivalent to self.to(torch.int64).

lstsq(other)

lt(other)

See torch.lt().

lt_(other)

In-place version of lt().

lu([pivot, get_infos])

See torch.lu()

lu_solve(LU_data, LU_pivots)

See torch.lu_solve()

map2_

map_(tensor, callable)

Applies callable for each element in self tensor and the given tensor and stores the results in self tensor.

masked_fill(mask, value)

Out-of-place version of torch.Tensor.masked_fill_()

masked_fill_(mask, value)

Fills elements of self tensor with value where mask is True.

masked_scatter(mask, tensor)

Out-of-place version of torch.Tensor.masked_scatter_()

masked_scatter_(mask, source)

Copies elements from source into self tensor at positions where the mask is True.

masked_select(mask)

See torch.masked_select()

materialize(shape[, device, dtype])

Create a Parameter with the same properties as the uninitialized one.

matmul(tensor2)

See torch.matmul()

matrix_exp()

See torch.matrix_exp()

matrix_power(n)

matrix_power() is deprecated; use torch.linalg.matrix_power() instead.

max([dim, keepdim])

See torch.max()

maximum(other)

See torch.maximum()

mean([dim, keepdim, dtype])

See torch.mean()

median([dim, keepdim])

See torch.median()

min([dim, keepdim])

See torch.min()

minimum(other)

See torch.minimum()

mm(mat2)

See torch.mm()

mode([dim, keepdim])

See torch.mode()

moveaxis(source, destination)

See torch.moveaxis()

movedim(source, destination)

See torch.movedim()

msort()

See torch.msort()

mul(value)

See torch.mul().

mul_(value)

In-place version of mul().

multinomial(num_samples[, replacement, ...])

See torch.multinomial()

multiply(value)

See torch.multiply().

multiply_(value)

In-place version of multiply().

mv(vec)

See torch.mv()

mvlgamma(p)

See torch.mvlgamma()

mvlgamma_(p)

In-place version of mvlgamma()

nan_to_num([nan, posinf, neginf])

See torch.nan_to_num().

nan_to_num_([nan, posinf, neginf])

In-place version of nan_to_num().

nanmean([dim, keepdim, dtype])

See torch.nanmean()

nanmedian([dim, keepdim])

See torch.nanmedian()

nanquantile(q[, dim, keepdim, interpolation])

See torch.nanquantile()

nansum([dim, keepdim, dtype])

See torch.nansum()

narrow(dimension, start, length)

See torch.narrow().

narrow_copy(dimension, start, length)

See torch.narrow_copy().

ndimension()

Alias for dim()

ne(other)

See torch.ne().

ne_(other)

In-place version of ne().

neg()

See torch.neg()

neg_()

In-place version of neg()

negative()

See torch.negative()

negative_()

In-place version of negative()

nelement()

Alias for numel()

new

new_empty(size, *[, dtype, device, ...])

Returns a Tensor of size size filled with uninitialized data.

new_empty_strided(size, stride[, dtype, ...])

Returns a Tensor of size size and strides stride filled with uninitialized data.

new_full(size, fill_value, *[, dtype, ...])

Returns a Tensor of size size filled with fill_value.

new_ones(size, *[, dtype, device, ...])

Returns a Tensor of size size filled with 1.

new_tensor(data, *[, dtype, device, ...])

Returns a new Tensor with data as the tensor data.

new_zeros(size, *[, dtype, device, ...])

Returns a Tensor of size size filled with 0.

nextafter(other)

See torch.nextafter()

nextafter_(other)

In-place version of nextafter()

nonzero()

See torch.nonzero()

nonzero_static(input, *, size[, fill_value])

Returns a 2-D tensor where each row is the index for a non-zero value.

norm([p, dim, keepdim, dtype])

See torch.norm()

normal_([mean, std, generator])

Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.

not_equal(other)

See torch.not_equal().

not_equal_(other)

In-place version of not_equal().

numel()

See torch.numel()

numpy(*[, force])

Returns the tensor as a NumPy ndarray.

orgqr(input2)

See torch.orgqr()

ormqr(input2, input3[, left, transpose])

See torch.ormqr()

outer(vec2)

See torch.outer().

permute(*dims)

See torch.permute()

pin_memory()

Copies the tensor to pinned memory, if it's not already pinned.

pinverse()

See torch.pinverse()

polygamma(n)

See torch.polygamma()

polygamma_(n)

In-place version of polygamma()

positive()

See torch.positive()

pow(exponent)

See torch.pow()

pow_(exponent)

In-place version of pow()

prelu

prod([dim, keepdim, dtype])

See torch.prod()

put(input, index, source[, accumulate])

Out-of-place version of torch.Tensor.put_().

put_(index, source[, accumulate])

Copies the elements from source into the positions specified by index.

q_per_channel_axis()

Given a Tensor quantized by linear (affine) per-channel quantization, returns the index of dimension on which per-channel quantization is applied.

q_per_channel_scales()

Given a Tensor quantized by linear (affine) per-channel quantization, returns a Tensor of scales of the underlying quantizer.

q_per_channel_zero_points()

Given a Tensor quantized by linear (affine) per-channel quantization, returns a tensor of zero_points of the underlying quantizer.

q_scale()

Given a Tensor quantized by linear (affine) quantization, returns the scale of the underlying quantizer.

q_zero_point()

Given a Tensor quantized by linear (affine) quantization, returns the zero_point of the underlying quantizer.

qr([some])

See torch.qr()

qscheme()

Returns the quantization scheme of a given QTensor.

quantile(q[, dim, keepdim, interpolation])

See torch.quantile()

rad2deg()

See torch.rad2deg()

rad2deg_()

In-place version of rad2deg()

random_([from, to, generator])

Fills self tensor with numbers sampled from the discrete uniform distribution over [from, to - 1].

ravel()

See torch.ravel()

reciprocal()

See torch.reciprocal()

reciprocal_()

In-place version of reciprocal()

record_stream(stream)

Marks the tensor as having been used by this stream.

refine_names(*names)

Refines the dimension names of self according to names.

register_hook(hook)

Registers a backward hook.

register_post_accumulate_grad_hook(hook)

Registers a backward hook that runs after grad accumulation.

reinforce(reward)

relu

relu_

remainder(divisor)

See torch.remainder()

remainder_(divisor)

In-place version of remainder()

rename(*names, **rename_map)

Renames dimension names of self.

rename_(*names, **rename_map)

In-place version of rename().

renorm(p, dim, maxnorm)

See torch.renorm()

renorm_(p, dim, maxnorm)

In-place version of renorm()

repeat(*sizes)

Repeats this tensor along the specified dimensions.

repeat_interleave(repeats[, dim, output_size])

See torch.repeat_interleave().

requires_grad_([requires_grad])

Change if autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place.

reshape(*shape)

Returns a tensor with the same data and number of elements as self but with the specified shape.

reshape_as(other)

Returns this tensor as the same shape as other.

resize(*sizes)

resize_(*sizes[, memory_format])

Resizes self tensor to the specified size.

resize_as(tensor)

resize_as_(tensor[, memory_format])

Resizes the self tensor to be the same size as the specified tensor.

resize_as_sparse_

resolve_conj()

See torch.resolve_conj()

resolve_neg()

See torch.resolve_neg()

retain_grad()

Enables this Tensor to have their grad populated during backward().

roll(shifts, dims)

See torch.roll()

rot90(k, dims)

See torch.rot90()

round([decimals])

See torch.round()

round_([decimals])

In-place version of round()

row_indices

rsqrt()

See torch.rsqrt()

rsqrt_()

In-place version of rsqrt()

scatter(dim, index, src)

Out-of-place version of torch.Tensor.scatter_()

scatter_(dim, index, src[, reduce])

Writes all values from the tensor src into self at the indices specified in the index tensor.

scatter_add(dim, index, src)

Out-of-place version of torch.Tensor.scatter_add_()

scatter_add_(dim, index, src)

Adds all values from the tensor src into self at the indices specified in the index tensor in a similar fashion as scatter_().

scatter_reduce(dim, index, src, reduce, *[, ...])

Out-of-place version of torch.Tensor.scatter_reduce_()

scatter_reduce_(dim, index, src, reduce, *)

Reduces all values from the src tensor to the indices specified in the index tensor in the self tensor using the applied reduction defined via the reduce argument ("sum", "prod", "mean", "amax", "amin").

select(dim, index)

See torch.select()

select_scatter(src, dim, index)

See torch.select_scatter()

set_([source, storage_offset, size, stride])

Sets the underlying storage, size, and strides.

sgn()

See torch.sgn()

sgn_()

In-place version of sgn()

share_memory_()

Moves the underlying storage to shared memory.

short([memory_format])

self.short() is equivalent to self.to(torch.int16).

sigmoid()

See torch.sigmoid()

sigmoid_()

In-place version of sigmoid()

sign()

See torch.sign()

sign_()

In-place version of sign()

signbit()

See torch.signbit()

sin()

See torch.sin()

sin_()

In-place version of sin()

sinc()

See torch.sinc()

sinc_()

In-place version of sinc()

sinh()

See torch.sinh()

sinh_()

In-place version of sinh()

size([dim])

Returns the size of the self tensor.

slice_scatter(src[, dim, start, end, step])

See torch.slice_scatter()

slogdet()

See torch.slogdet()

smm(mat)

See torch.smm()

softmax(dim)

Alias for torch.nn.functional.softmax().

solve(other)

sort([dim, descending])

See torch.sort()

sparse_dim()

Return the number of sparse dimensions in a sparse tensor self.

sparse_mask(mask)

Returns a new sparse tensor with values from a strided tensor self filtered by the indices of the sparse tensor mask.

sparse_resize_(size, sparse_dim, dense_dim)

Resizes self sparse tensor to the desired size and the number of sparse and dense dimensions.

sparse_resize_and_clear_(size, sparse_dim, ...)

Removes all specified elements from a sparse tensor self and resizes self to the desired size and the number of sparse and dense dimensions.

split(split_size[, dim])

See torch.split()

split_with_sizes

sqrt()

See torch.sqrt()

sqrt_()

In-place version of sqrt()

square()

See torch.square()

square_()

In-place version of square()

squeeze([dim])

See torch.squeeze()

squeeze_([dim])

In-place version of squeeze()

sspaddmm(mat1, mat2, *[, beta, alpha])

See torch.sspaddmm()

std([dim, correction, keepdim])

See torch.std()

stft(n_fft[, hop_length, win_length, ...])

See torch.stft()

storage()

Returns the underlying TypedStorage.

storage_offset()

Returns self tensor's offset in the underlying storage in terms of number of storage elements (not bytes).

storage_type()

Returns the type of the underlying storage.

stride(dim)

Returns the stride of self tensor.

sub(other, *[, alpha])

See torch.sub().

sub_(other, *[, alpha])

In-place version of sub()

subtract(other, *[, alpha])

See torch.subtract().

subtract_(other, *[, alpha])

In-place version of subtract().

sum([dim, keepdim, dtype])

See torch.sum()

sum_to_size(*size)

Sum this tensor to size.

svd([some, compute_uv])

See torch.svd()

swapaxes(axis0, axis1)

See torch.swapaxes()

swapaxes_(axis0, axis1)

In-place version of swapaxes()

swapdims(dim0, dim1)

See torch.swapdims()

swapdims_(dim0, dim1)

In-place version of swapdims()

symeig([eigenvectors])

t()

See torch.t()

t_()

In-place version of t()

take(indices)

See torch.take()

take_along_dim(indices, dim)

See torch.take_along_dim()

tan()

See torch.tan()

tan_()

In-place version of tan()

tanh()

See torch.tanh()

tanh_()

In-place version of tanh()

tensor_split(indices_or_sections[, dim])

See torch.tensor_split()

tile(dims)

See torch.tile()

to(*args, **kwargs)

Performs Tensor dtype and/or device conversion.

to_dense([dtype, masked_grad])

Creates a strided copy of self if self is not a strided tensor, otherwise returns self.

to_mkldnn()

Returns a copy of the tensor in torch.mkldnn layout.

to_padded_tensor(padding[, output_size])

See torch.nested.to_padded_tensor()

to_sparse(sparseDims)

Returns a sparse copy of the tensor.

to_sparse_bsc(blocksize, dense_dim)

Convert a tensor to a block sparse column (BSC) storage format of given blocksize.

to_sparse_bsr(blocksize, dense_dim)

Convert a tensor to a block sparse row (BSR) storage format of given blocksize.

to_sparse_coo()

Convert a tensor to coordinate format.

to_sparse_csc()

Convert a tensor to compressed column storage (CSC) format.

to_sparse_csr([dense_dim])

Convert a tensor to compressed row storage format (CSR).

tolist()

Returns the tensor as a (nested) list.

topk(k[, dim, largest, sorted])

See torch.topk()

trace()

See torch.trace()

transpose(dim0, dim1)

See torch.transpose()

transpose_(dim0, dim1)

In-place version of transpose()

triangular_solve(A[, upper, transpose, ...])

See torch.triangular_solve()

tril([diagonal])

See torch.tril()

tril_([diagonal])

In-place version of tril()

triu([diagonal])

See torch.triu()

triu_([diagonal])

In-place version of triu()

true_divide(value)

See torch.true_divide()

true_divide_(value)

In-place version of true_divide()

trunc()

See torch.trunc()

trunc_()

In-place version of trunc()

type([dtype, non_blocking])

Returns the type if dtype is not provided, else casts this object to the specified type.

type_as(tensor)

Returns this tensor cast to the type of the given tensor.

unbind([dim])

See torch.unbind()

unflatten(dim, sizes)

See torch.unflatten().

unfold(dimension, size, step)

Returns a view of the original tensor which contains all slices of size size from self tensor in the dimension dimension.

uniform_([from, to, generator])

Fills self tensor with numbers sampled from the continuous uniform distribution.

unique([sorted, return_inverse, ...])

Returns the unique elements of the input tensor.

unique_consecutive([return_inverse, ...])

Eliminates all but the first element from every consecutive group of equivalent elements.

unsafe_chunk(chunks[, dim])

See torch.unsafe_chunk()

unsafe_split(split_size[, dim])

See torch.unsafe_split()

unsafe_split_with_sizes

unsqueeze(dim)

See torch.unsqueeze()

unsqueeze_(dim)

In-place version of unsqueeze()

untyped_storage()

Returns the underlying UntypedStorage.

values()

Return the values tensor of a sparse COO tensor.

var([dim, correction, keepdim])

See torch.var()

vdot(other)

See torch.vdot()

view(*shape)

Returns a new tensor with the same data as the self tensor but of a different shape.

view_as(other)

View this tensor as the same size as other.

vsplit(split_size_or_sections)

See torch.vsplit()

where(condition, y)

self.where(condition, y) is equivalent to torch.where(condition, self, y).

xlogy(other)

See torch.xlogy()

xlogy_(other)

In-place version of xlogy()

xpu([device, non_blocking, memory_format])

Returns a copy of this object in XPU memory.

zero_()

Fills self tensor with zeros.

Attributes

H

Returns a view of a matrix (2-D tensor) conjugated and transposed.

T

Returns a view of this tensor with its dimensions reversed.

data

device

Is the torch.device where this Tensor is.

dtype

grad

This attribute is None by default and becomes a Tensor the first time a call to backward() computes gradients for self.

grad_fn

imag

Returns a new tensor containing imaginary values of the self tensor.

is_cpu

Is True if the Tensor is stored on the CPU, False otherwise.

is_cuda

Is True if the Tensor is stored on the GPU, False otherwise.

is_ipu

Is True if the Tensor is stored on the IPU, False otherwise.

is_leaf

All Tensors that have requires_grad which is False will be leaf Tensors by convention.

is_meta

Is True if the Tensor is a meta tensor, False otherwise.

is_mkldnn

is_mps

Is True if the Tensor is stored on the MPS device, False otherwise.

is_mtia

is_nested

is_ort

is_quantized

Is True if the Tensor is quantized, False otherwise.

is_sparse

Is True if the Tensor uses sparse COO storage layout, False otherwise.

is_sparse_csr

Is True if the Tensor uses sparse CSR storage layout, False otherwise.

is_vulkan

is_xla

Is True if the Tensor is stored on an XLA device, False otherwise.

is_xpu

Is True if the Tensor is stored on the XPU, False otherwise.

itemsize

Alias for element_size()

layout

mH

Accessing this property is equivalent to calling adjoint().

mT

Returns a view of this tensor with the last two dimensions transposed.

name

names

Stores names for each of this tensor's dimensions.

nbytes

Returns the number of bytes consumed by the "view" of elements of the Tensor if the Tensor does not use sparse storage layout.

ndim

Alias for dim()

output_nr

real

Returns a new tensor containing real values of the self tensor for a complex-valued input tensor.

requires_grad

Is True if gradients need to be computed for this Tensor, False otherwise.

retains_grad

Is True if this Tensor is non-leaf and its grad is enabled to be populated during backward(), False otherwise.

shape

Returns the size of the self tensor.

volatile

property is_leaf: bool

All Tensors that have requires_grad which is False will be leaf Tensors by convention.

For Tensors that have requires_grad which is True, they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so grad_fn is None.

Only leaf Tensors will have their grad populated during a call to backward(). To get grad populated for non-leaf Tensors, you can use retain_grad().

Example:

>>> a = torch.rand(10, requires_grad=True)
>>> a.is_leaf
True
>>> b = torch.rand(10, requires_grad=True).cuda()
>>> b.is_leaf
False
# b was created by the operation that cast a cpu Tensor into a cuda Tensor
>>> c = torch.rand(10, requires_grad=True) + 2
>>> c.is_leaf
False
# c was created by the addition operation
>>> d = torch.rand(10).cuda()
>>> d.is_leaf
True
# d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)
>>> e = torch.rand(10).cuda().requires_grad_()
>>> e.is_leaf
True
# e requires gradients and has no operations creating it
>>> f = torch.rand(10, requires_grad=True, device="cuda")
>>> f.is_leaf
True
# f requires grad, has no operation creating it

materialize(shape, device=None, dtype=None)

Create a Parameter with the same properties as the uninitialized one. Given a shape, it materializes a parameter on the same device and with the same dtype as the current one, or with the device and dtype specified in the arguments.

Parameters:
  • shape (Tuple[int, ...]) – the shape of the materialized tensor.

  • device (torch.device) – the desired device of the materialized parameter. Optional.

  • dtype (torch.dtype) – the desired floating point dtype of the materialized parameter. Optional.

Return type:

None
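
A hedged sketch of calling materialize() directly; the shape and dtype below are illustrative, and lazy modules normally call this for you on the first forward pass, once input shapes are known:

>>> import torch
>>> from pytorch_pfn_extras.nn.modules.lazy_linear import UninitializedParameter
>>> p = UninitializedParameter()
>>> p.materialize((4, 8), dtype=torch.float64)
>>> p.shape, p.dtype
(torch.Size([4, 8]), torch.float64)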

share_memory_()

Moves the underlying storage to shared memory.

This is a no-op if the underlying storage is already in shared memory and for CUDA tensors. Tensors in shared memory cannot be resized.

See torch.UntypedStorage.share_memory_() for more details.

Return type:

UninitializedParameter
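
The semantics follow torch.Tensor.share_memory_(). A minimal sketch of that underlying behavior on a plain CPU parameter (whether this may be called before the parameter is materialized is not stated here, so the sketch uses a regular torch.nn.Parameter):

>>> import torch
>>> p = torch.nn.Parameter(torch.zeros(4, 8))
>>> p.is_shared()
False
>>> _ = p.share_memory_()  # moves the storage to shared memory; returns self
>>> p.is_shared()
True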