Network Module
This module holds the implementation of all the networks, their losses, and their components.
class stransfer.network.ContentLoss(target)
    Implementation of the content loss.

    Parameters:
        target (Tensor) – the target image we want to use to calculate the content loss
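    A content loss of this kind is usually the mean squared error between the features of the image being generated and a fixed target. The sketch below illustrates the general pattern of a pass-through loss layer; it is an assumption about the approach, not the exact implementation of this class.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ContentLossSketch(nn.Module):
            """Illustrative only: MSE between the current features and a fixed target."""

            def __init__(self, target: torch.Tensor):
                super().__init__()
                # Detach the target so it is treated as a constant, not something to optimize
                self.target = target.detach()
                self.loss = torch.tensor(0.0)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # Record the loss and return the input unchanged, so the module
                # can sit transparently inside a larger network
                self.loss = F.mse_loss(x, self.target)
                return x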
class stransfer.network.FeatureReconstructionLoss(target)
    Implementation of the feature reconstruction loss.

    Note: this loss is currently not used since it doesn't seem to provide much improvement over the normal ContentLoss.
class stransfer.network.ImageTransformNet(style_image, batch_size=4)
    Implementation of the fast style transfer image transform network, as defined in:
    Perceptual Losses for Real-Time Style Transfer and Super-Resolution

    Parameters:
        style_image (Tensor) – the image we want to use as the style reference
        batch_size – the size of the batch
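    The example below sketches how such a network is typically constructed and applied; it assumes ImageTransformNet is a torch.nn.Module whose forward pass maps a batch of content images to stylized images, and the tensors are placeholders rather than real images.

        import torch
        from stransfer import network

        style_image = torch.rand(1, 3, 256, 256)    # placeholder style reference
        content_batch = torch.rand(4, 3, 256, 256)  # placeholder batch of content images

        transform_net = network.ImageTransformNet(style_image, batch_size=4)
        stylized = transform_net(content_batch)  # assumes a standard nn.Module forward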
get_total_variation_regularization_loss(transformed_image, regularization_factor=1e-06)
    Calculates a regularization loss that measures how 'noisy' the current image is, penalizing very noisy images.
    See: https://en.wikipedia.org/wiki/Total_variation_denoising#2D_signal_images
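    Total variation regularization sums the differences between neighbouring pixels. The sketch below shows the standard anisotropic form; the implementation here may differ in details such as squaring or normalization.

        import torch

        def total_variation_loss(img: torch.Tensor, factor: float = 1e-6) -> torch.Tensor:
            """Anisotropic total variation for a batch of images shaped (N, C, H, W)."""
            # Differences between vertically and horizontally adjacent pixels
            dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().sum()
            dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().sum()
            return factor * (dh + dw)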
process_image(image_path, style_name='nsp', out_dir='results/')
    Processes a given input image at image_path with a network pretrained on the style style_name.
    Saves the processed image to out_dir.

    Parameters:
        image_path (str) – path to the image we want to stylize
        style_name – name of the style we want to apply to the image. Note that a pretrained model with said style must exist in data/models/
        out_dir – directory where the stylized image will be saved

    Return type:
        None
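    A minimal usage sketch, assuming process_image is called on an ImageTransformNet instance and that pretrained weights for the 'nsp' style exist in data/models/. The image path and style tensor are hypothetical.

        import torch
        from stransfer import network

        style_image = torch.rand(1, 3, 256, 256)  # placeholder style reference
        net = network.ImageTransformNet(style_image)

        net.process_image('photos/lake.jpg',  # hypothetical input image
                          style_name='nsp',
                          out_dir='results/')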
class stransfer.network.StyleLoss(target)
    Implementation of the style loss.

    Parameters:
        target (Tensor) – the tensor representing the style image we want to take as reference during training
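    Style losses are conventionally computed as the mean squared error between Gram matrices of feature maps. The sketch below shows that general technique; it is not necessarily the exact implementation of this class.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def gram_matrix(features: torch.Tensor) -> torch.Tensor:
            """Gram matrix of a (N, C, H, W) feature map, normalized by its size."""
            n, c, h, w = features.size()
            flat = features.view(n * c, h * w)
            return flat @ flat.t() / (n * c * h * w)

        class StyleLossSketch(nn.Module):
            """Illustrative only: MSE between Gram matrices of the input and the style target."""

            def __init__(self, target: torch.Tensor):
                super().__init__()
                self.target_gram = gram_matrix(target).detach()
                self.loss = torch.tensor(0.0)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                self.loss = F.mse_loss(gram_matrix(x), self.target_gram)
                return x  # pass the input through unchanged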
class stransfer.network.StyleNetwork(style_image, content_image=None)
    Implementation of the StyleNetwork as defined in
    A Neural Algorithm of Artistic Style - Gatys (2015)

    Parameters:
        style_image (Tensor) – the image we want to use as the style reference
        content_image – the image we want to use as the content target
forward(input_image, content_image=None, style_image=None)
    Given an input image, passes it through all layers in the network.

    Parameters:
        input_image (Tensor) – the image to pass through the network
        content_image – if specified, this will change the current target for the content loss
        style_image – if specified, this will change the current target for the style loss

    Return type:
        None
get_total_current_content_loss(weight=1)
    Returns the sum of the losses present in all content nodes.
get_total_current_feature_loss(weight=1)
    Returns the sum of the losses present in all feature nodes.
get_total_current_style_loss(weight=1)
    Returns the sum of the losses present in all style nodes.
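    Putting the StyleNetwork pieces together, a Gatys-style transfer optimizes the input image directly. The loop below is a sketch under the assumptions that the get_total_current_* methods return differentiable tensors and that Adam with the shown hyperparameters is an acceptable choice; neither is specified by the documentation above.

        import torch
        from stransfer import network

        style_image = torch.rand(1, 3, 256, 256)    # placeholder style image
        content_image = torch.rand(1, 3, 256, 256)  # placeholder content image

        net = network.StyleNetwork(style_image, content_image)

        # Optimize the image itself, starting from a copy of the content image
        input_image = content_image.clone().requires_grad_(True)
        optimizer = torch.optim.Adam([input_image], lr=0.01)  # assumed optimizer and learning rate

        for step in range(300):
            optimizer.zero_grad()
            net.forward(input_image)  # populates the loss nodes; returns None per the docs
            loss = (net.get_total_current_style_loss(weight=1e6)
                    + net.get_total_current_content_loss(weight=1))
            loss.backward()
            optimizer.step()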
class stransfer.network.VideoTransformNet(style_image, batch_size=4, fast_transfer_dict=None)
    Implementation of the video transform net.

    Parameters:
        style_image (Tensor) – the image we'll use as the style reference
        batch_size – the size of the batch
        fast_transfer_dict – state dict from a pretrained 'fast style network'. This lets us start training the video model from those weights, which bootstraps training. Doing so is recommended since the current video dataset is not very big.
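    A construction sketch that bootstraps from a pretrained fast style network; the checkpoint path is hypothetical and the style tensor is a placeholder.

        import torch
        from stransfer import network

        style_image = torch.rand(1, 3, 256, 256)  # placeholder style reference

        # Hypothetical checkpoint path; any pretrained 'fast style network' state dict works here
        fast_state = torch.load('data/models/nsp_fast_transfer.pth')

        video_net = network.VideoTransformNet(style_image,
                                              batch_size=4,
                                              fast_transfer_dict=fast_state)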
get_temporal_loss(old_content, old_stylized, current_content, current_stylized, temporal_weight=1)
    Calculates the temporal loss. See https://github.com/tupini07/StyleTransfer/issues/5

    Parameters:
        old_content – tensor representing the content of the previous frame
        old_stylized – tensor representing the stylized previous frame
        current_content – tensor representing the content of the current frame
        current_stylized – tensor representing the stylized current frame
        temporal_weight – weight for the temporal loss

    Returns:
        the temporal loss
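    The exact formulation is discussed in the linked issue; as a rough illustration, a common temporal-consistency loss penalizes changes in the stylized output that exceed the corresponding changes in the content frames. The sketch below shows that general idea, not necessarily the formula used here.

        import torch
        import torch.nn.functional as F

        def temporal_loss_sketch(old_content, old_stylized,
                                 current_content, current_stylized,
                                 temporal_weight=1.0):
            """Illustrative only: stylized frames should change roughly as much as the content does."""
            content_change = current_content - old_content
            stylized_change = current_stylized - old_stylized
            return temporal_weight * F.mse_loss(stylized_change, content_change)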
process_video(video_path, style_name='nsp', working_dir='workdir/', out_dir='results/', fps=24.0)
    Applies a style to a single video using pretrained weights. Note that the weights must exist; if they do not, an exception will be raised.

    Parameters:
        video_path (str) – the path of the video to stylize
        style_name – the name of the style to apply to the video. The weights for a video transform model using said style must exist in data/models/
        working_dir – directory where the transformed frames will be saved
        out_dir – directory where the final transformed video will be saved
        fps – the frames per second to use in the final video
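    A usage sketch, assuming process_video is called on a VideoTransformNet instance and that pretrained video-model weights for the 'nsp' style exist in data/models/. The video path and style tensor are hypothetical.

        import torch
        from stransfer import network

        style_image = torch.rand(1, 3, 256, 256)  # placeholder style reference
        video_net = network.VideoTransformNet(style_image)

        video_net.process_video('clips/harbor.mp4',  # hypothetical input video
                                style_name='nsp',
                                working_dir='workdir/',
                                out_dir='results/',
                                fps=24.0)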
video_train(style_name='nsp', epochs=50, temporal_weight=0.8, style_weight=100000, feature_weight=1, content_weight=1)
    Trains the video network.

    Parameters:
        style_name – the name of the style (used for saving and loading checkpoints)
        epochs – how many epochs the training should run for
        temporal_weight – the weight for the temporal loss
        style_weight – the weight for the style loss
        feature_weight – the weight for the feature loss
        content_weight – the weight for the content loss

    Return type:
        None
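    A training sketch using the documented defaults, assuming video_train is called on a VideoTransformNet instance; the style tensor is a placeholder.

        import torch
        from stransfer import network

        style_image = torch.rand(1, 3, 256, 256)  # placeholder style reference
        video_net = network.VideoTransformNet(style_image, batch_size=4)

        # Checkpoints are saved and loaded under the given style name
        video_net.video_train(style_name='nsp',
                              epochs=50,
                              temporal_weight=0.8,
                              style_weight=100000,
                              feature_weight=1,
                              content_weight=1)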