Extracting Regions from Images with TensorFlow

  1. tf.image.crop_to_bounding_box(image, offset_height, offset_width, target_height, target_width)
  2. tf.image.extract_glimpse(input, size, offsets, centered=None, normalized=None, uniform_noise=None, name=None)
  3. tf.extract_image_patches(images, ksizes, strides, rates, padding, name=None)
  4. tf.image.crop_and_resize(image, boxes, box_ind, crop_size, method=None, extrapolation_value=None, name=None)


tf.image.crop_to_bounding_box(image, offset_height, offset_width, target_height, target_width)

Crops an image to a specified bounding box.

This op cuts a rectangular part out of image. The top-left corner of the returned image is at offset_height, offset_width in image, and its lower-right corner is at offset_height + target_height, offset_width + target_width.

Args:
  • image: 3-D tensor with shape [height, width, channels]
  • offset_height: Vertical coordinate of the top-left corner of the result in the input.
  • offset_width: Horizontal coordinate of the top-left corner of the result in the input.
  • target_height: Height of the result.
  • target_width: Width of the result.
Returns:

3-D tensor of image with shape [target_height, target_width, channels]

Raises:
  • ValueError: If the shape of image is incompatible with the offset_* or target_* arguments, or either offset_height or offset_width is negative, or either target_height or target_width is not positive.
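A minimal usage sketch (the 100x100 dummy image and the chosen offsets are placeholder assumptions, not from the original text; the call itself is the same under TF 1.x and TF 2.x):

    import tensorflow as tf

    # A dummy 100x100 RGB image (placeholder data).
    image = tf.zeros([100, 100, 3], dtype=tf.float32)

    # Take the 40x60 region whose top-left corner is at row 10, column 20.
    crop = tf.image.crop_to_bounding_box(
        image, offset_height=10, offset_width=20,
        target_height=40, target_width=60)

    print(crop.shape)  # (40, 60, 3)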

tf.image.extract_glimpse(input, size, offsets, centered=None, normalized=None, uniform_noise=None, name=None)

Extracts a glimpse from the input tensor.

Returns a set of windows called glimpses extracted at the locations offsets from the input tensor. If a window only partially overlaps the input, the non-overlapping areas are filled with random noise.

The result is a 4-D tensor of shape [batch_size, glimpse_height, glimpse_width, channels]. The channels and batch dimensions are the same as that of the input tensor. The height and width of the output windows are specified in the size parameter.

The arguments normalized and centered control how the windows are built:

  • If the coordinates are normalized but not centered, 0.0 and 1.0 correspond to the minimum and maximum of each height and width dimension.
  • If the coordinates are both normalized and centered, they range from -1.0 to 1.0. The coordinates (-1.0, -1.0) correspond to the upper left corner, the lower right corner is located at (1.0, 1.0) and the center is at (0, 0).
  • If the coordinates are not normalized they are interpreted as numbers of pixels.
Args:
  • input: A Tensor of type float32. A 4-D float tensor of shape [batch_size, height, width, channels].
  • size: A Tensor of type int32. A 1-D tensor of 2 elements containing the size of the glimpses to extract. The glimpse height must be specified first, followed by the glimpse width, i.e. (H, W).
  • offsets: A Tensor of type float32. A 2-D float tensor of shape [batch_size, 2] containing the (y, x) locations of the center of each window. A single (y, x) offset can be brought to this shape with tf.reshape(offsets, [1, 2]).
  • centered: An optional bool. Defaults to True. Indicates whether the offset coordinates are centered relative to the image, in which case the (0, 0) offset is relative to the center of the input images. If false, the (0, 0) offset corresponds to the upper left corner of the input images.
  • normalized: An optional bool. Defaults to True. Indicates whether the offset coordinates are normalized.
  • uniform_noise: An optional bool. Defaults to True. Indicates whether the noise should be generated using a uniform distribution or a Gaussian distribution.
  • name: A name for the operation (optional).
Returns:

Tensor of type float32. A tensor representing the glimpses [batch_size, glimpse_height, glimpse_width, channels].
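A minimal sketch with a dummy single-image batch (placeholder data). centered and normalized are passed explicitly, so the (0.0, 0.0) offset below refers to the image center; uniform_noise is omitted because that keyword was renamed noise in TF 2.x:

    import tensorflow as tf

    # Batch of one 100x100 RGB image (placeholder data).
    images = tf.zeros([1, 100, 100, 3], dtype=tf.float32)

    # One (y, x) offset per image; with centered=True and normalized=True,
    # (0.0, 0.0) is the center of the image.
    offsets = tf.constant([[0.0, 0.0]], dtype=tf.float32)

    # Extract a 32x32 window around each offset.
    glimpses = tf.image.extract_glimpse(
        images, size=[32, 32], offsets=offsets,
        centered=True, normalized=True)

    print(glimpses.shape)  # (1, 32, 32, 3)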


tf.extract_image_patches(images, ksizes, strides, rates, padding, name=None)

Defined in tensorflow/python/ops/gen_array_ops.py.

See the guide: Tensor Transformations > Slicing and Joining

Extract patches from images and put them in the "depth" output dimension.

Args:

  • images: A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 4-D Tensor with shape [batch, in_rows, in_cols, depth].
  • ksizes: A list of ints that has length >= 4. The size of the sliding window for each dimension of images.
  • strides: A list of ints that has length >= 4. 1-D of length 4. How far the centers of two consecutive patches are in the images. Must be: [1, stride_rows, stride_cols, 1].
  • rates: A list of ints that has length >= 4. 1-D of length 4. Must be: [1, rate_rows, rate_cols, 1]. This is the input stride, specifying how far apart two consecutive patch samples are in the input. Equivalent to extracting patches with patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1), followed by subsampling them spatially by a factor of rates. This is equivalent to rate in dilated (a.k.a. atrous) convolutions.
  • padding: A string from: "SAME", "VALID". The type of padding algorithm to use.

    We specify the size-related attributes as:

          ksizes = [1, ksize_rows, ksize_cols, 1]
          strides = [1, strides_rows, strides_cols, 1]
          rates = [1, rates_rows, rates_cols, 1]
    
  • name: A name for the operation (optional).

Returns:

A Tensor. Has the same type as images. A 4-D tensor of shape [batch, out_rows, out_cols, ksize_rows * ksize_cols * depth], with each patch vectorized into the "depth" dimension.
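A minimal sketch using the TF 1.x name documented here (in TF 2.x the same op is exposed as tf.image.extract_patches, with sizes instead of ksizes); the toy 10x10 image is placeholder data:

    import tensorflow as tf

    # Batch of one 10x10 single-channel image with values 0..99 (placeholder data).
    images = tf.reshape(tf.range(100, dtype=tf.float32), [1, 10, 10, 1])

    # 3x3 patches, stride 5, no dilation, VALID padding.
    # TF 1.x API; under TF 2.x use tf.image.extract_patches(..., sizes=...).
    patches = tf.extract_image_patches(
        images,
        ksizes=[1, 3, 3, 1],
        strides=[1, 5, 5, 1],
        rates=[1, 1, 1, 1],
        padding='VALID')

    # Each 3x3x1 patch is flattened into the "depth" dimension:
    # output shape is [1, 2, 2, 9].
    print(patches.shape)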



tf.image.crop_and_resize(image, boxes, box_ind, crop_size, method=None, extrapolation_value=None, name=None)

Extracts crops from the input image tensor and bilinearly resizes them (possibly with aspect ratio change) to a common output size specified by crop_size. This is more general than the crop_to_bounding_box op, which extracts a fixed-size slice from the input image and does not allow resizing or aspect ratio change.

Returns a tensor with crops from the input image at positions defined at the bounding box locations in boxes. The cropped boxes are all resized (with bilinear interpolation) to a fixed size = [crop_height, crop_width]. The result is a 4-D tensor [num_boxes, crop_height, crop_width, depth].

Args:
  • image: A Tensor. Must be one of the following types: uint8, int8, int16, int32, int64, half, float32, float64. A 4-D tensor of shape [batch, image_height, image_width, depth]. Both image_height and image_width need to be positive.
  • boxes: A Tensor of type float32. A 2-D tensor of shape [num_boxes, 4]. The i-th row of the tensor specifies the coordinates of a box in the box_ind[i] image and is specified in normalized coordinates [y1, x1, y2, x2]. A normalized coordinate value of y is mapped to the image coordinate at y * (image_height - 1), so the [0, 1] interval of normalized image height is mapped to [0, image_height - 1] in image height coordinates. We do allow y1 > y2, in which case the sampled crop is an up-down flipped version of the original image. The width dimension is treated similarly. Normalized coordinates outside the [0, 1] range are allowed, in which case we use extrapolation_value to extrapolate the input image values.
  • box_ind: A Tensor of type int32. A 1-D tensor of shape [num_boxes] with int32 values in [0, batch). The value of box_ind[i] specifies the image that the i-th box refers to.
  • crop_size: A Tensor of type int32. A 1-D tensor of 2 elements, size = [crop_height, crop_width]. All cropped image patches are resized to this size. The aspect ratio of the image content is not preserved. Both crop_height and crop_width need to be positive.
  • method: An optional string from: "bilinear". Defaults to "bilinear". A string specifying the interpolation method. Only 'bilinear' is supported for now.
  • extrapolation_value: An optional float. Defaults to 0. Value used for extrapolation, when applicable.
  • name: A name for the operation (optional).
Returns:

Tensor of type float32. A 4-D tensor of shape [num_boxes, crop_height, crop_width, depth].
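A minimal sketch, assuming a dummy single-image batch and one normalized box covering the upper-left quarter of the image (box_ind is passed positionally because that keyword was renamed box_indices in TF 2.x):

    import tensorflow as tf

    # Batch of one 100x100 RGB image (placeholder data).
    image = tf.zeros([1, 100, 100, 3], dtype=tf.float32)

    # One box in normalized [y1, x1, y2, x2] coordinates:
    # the upper-left quarter of the image.
    boxes = tf.constant([[0.0, 0.0, 0.5, 0.5]], dtype=tf.float32)

    # Each box refers to image 0 in the batch.
    box_ind = tf.constant([0], dtype=tf.int32)

    # Every crop is bilinearly resized to 64x64.
    crops = tf.image.crop_and_resize(image, boxes, box_ind, crop_size=[64, 64])

    print(crops.shape)  # (1, 64, 64, 3)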
