[Solved] AttributeError: Can't get attribute 'C3k' on <module 'ultralytics.nn.modules.block'>
When running the command to validate the model on the test set, the following problems occurred:
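For reference, the error is raised while the trained weights are being unpickled, i.e. at the moment the checkpoint is loaded for validation. A minimal sketch of the kind of call that triggers it (the weight path and data config below are placeholders for your own files):

```python
from ultralytics import YOLO

# Loading weights trained with a newer ultralytics version on an older codebase
# is what triggers the unpickling AttributeError described below.
model = YOLO("runs/detect/train/weights/best.pt")
model.val(data="data.yaml", split="test")
```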
Problem 1:
AttributeError: Can't get attribute 'C3k' on <module 'ultralytics.nn.modules.block'>
Solution:
```python
class C3k(C3):
    """C3k is a CSP bottleneck module with customizable kernel sizes for feature extraction in neural networks."""

    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, k=3):
        """Initializes the C3k module with specified channels, number of layers, and configurations."""
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)  # hidden channels
        # self.m = nn.Sequential(*(RepBottleneck(c_, c_, shortcut, g, k=(k, k), e=1.0) for _ in range(n)))
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, k=(k, k), e=1.0) for _ in range(n)))
```
Problem 2:
AttributeError: Can't get attribute 'C3k2' on <module 'ultralytics.nn.modules.block'>
Solution:
Following the traceback, open the .py file where the error occurs and add the following code:
```python
class C3k2(C2f):
    """Faster Implementation of CSP Bottleneck with 2 convolutions."""

    def __init__(self, c1, c2, n=1, c3k=False, e=0.5, g=1, shortcut=True):
        """Initializes the C3k2 module, a faster CSP Bottleneck with 2 convolutions and optional C3k blocks."""
        super().__init__(c1, c2, n, shortcut, g, e)
        self.m = nn.ModuleList(
            C3k(self.c, self.c, 2, shortcut, g) if c3k else Bottleneck(self.c, self.c, shortcut, g) for _ in range(n)
        )
```
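After adding both classes, a quick sanity check (a minimal sketch, assuming they were added to ultralytics/nn/modules/block.py as the traceback indicates, with arbitrary channel sizes) is to build a small C3k2 with c3k=True, which also routes through the new C3k, and run a random tensor through it:

```python
import torch
from ultralytics.nn.modules.block import C3k2  # the file patched above

m = C3k2(64, 64, n=1, c3k=True)    # c3k=True also exercises the new C3k class
y = m(torch.randn(1, 64, 32, 32))
print(y.shape)                     # expected: torch.Size([1, 64, 32, 32])
```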
Problem 3:
AttributeError: Can't get attribute 'C2PSA' on <module 'ultralytics.nn.modules.block'>
Solution:
Following the traceback, open the .py file where the error occurs and add the following code:
```python
class C2PSA(nn.Module):
    """
    C2PSA module with attention mechanism for enhanced feature extraction and processing.

    This module implements a convolutional block with attention mechanisms to enhance feature extraction and
    processing capabilities. It includes a series of PSABlock modules for self-attention and feed-forward operations.

    Attributes:
        c (int): Number of hidden channels.
        cv1 (Conv): 1x1 convolution layer to reduce the number of input channels to 2*c.
        cv2 (Conv): 1x1 convolution layer to reduce the number of output channels to c.
        m (nn.Sequential): Sequential container of PSABlock modules for attention and feed-forward operations.

    Methods:
        forward: Performs a forward pass through the C2PSA module, applying attention and feed-forward operations.

    Notes:
        This module essentially is the same as PSA module, but refactored to allow stacking more PSABlock modules.

    Examples:
        >>> c2psa = C2PSA(c1=256, c2=256, n=3, e=0.5)
        >>> input_tensor = torch.randn(1, 256, 64, 64)
        >>> output_tensor = c2psa(input_tensor)
    """

    def __init__(self, c1, c2, n=1, e=0.5):
        """Initializes the C2PSA module with specified input/output channels, number of layers, and expansion ratio."""
        super().__init__()
        assert c1 == c2
        self.c = int(c1 * e)
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv(2 * self.c, c1, 1)

        self.m = nn.Sequential(*(PSABlock(self.c, attn_ratio=0.5, num_heads=self.c // 64) for _ in range(n)))

    def forward(self, x):
        """Processes the input tensor 'x' through a series of PSA blocks and returns the transformed tensor."""
        a, b = self.cv1(x).split((self.c, self.c), dim=1)
        b = self.m(b)
        return self.cv2(torch.cat((a, b), 1))
```
Problem 4:
AttributeError: Can't get attribute 'PSABlock' on <module 'ultralytics.nn.modules.block'>
Solution:
Following the traceback, open the .py file where the error occurs and add the following code:
```python
class PSABlock(nn.Module):
    """
    PSABlock class implementing a Position-Sensitive Attention block for neural networks.

    This class encapsulates the functionality for applying multi-head attention and feed-forward neural network layers
    with optional shortcut connections.

    Attributes:
        attn (Attention): Multi-head attention module.
        ffn (nn.Sequential): Feed-forward neural network module.
        add (bool): Flag indicating whether to add shortcut connections.

    Methods:
        forward: Performs a forward pass through the PSABlock, applying attention and feed-forward layers.

    Examples:
        Create a PSABlock and perform a forward pass
        >>> psablock = PSABlock(c=128, attn_ratio=0.5, num_heads=4, shortcut=True)
        >>> input_tensor = torch.randn(1, 128, 32, 32)
        >>> output_tensor = psablock(input_tensor)
    """

    def __init__(self, c, attn_ratio=0.5, num_heads=4, shortcut=True) -> None:
        """Initializes the PSABlock with attention and feed-forward layers for enhanced feature extraction."""
        super().__init__()

        self.attn = Attention(c, attn_ratio=attn_ratio, num_heads=num_heads)
        self.ffn = nn.Sequential(Conv(c, c * 2, 1), Conv(c * 2, c, 1, act=False))
        self.add = shortcut

    def forward(self, x):
        """Executes a forward pass through PSABlock, applying attention and feed-forward layers to the input tensor."""
        x = x + self.attn(x) if self.add else self.attn(x)
        x = x + self.ffn(x) if self.add else self.ffn(x)
        return x
```
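Once all four classes are in place, re-run the original validation to confirm the fix. A minimal re-check sketch ("best.pt" and "data.yaml" are placeholders; it also assumes helpers such as Conv, C3, C2f, Bottleneck, and Attention are already defined in block.py — if one of them is missing too, the same kind of AttributeError will tell you which class to copy over next):

```python
from ultralytics import YOLO
from ultralytics.nn.modules.block import C3k, C3k2, C2PSA, PSABlock  # should import cleanly now

model = YOLO("best.pt")                               # unpickling no longer raises AttributeError
metrics = model.val(data="data.yaml", split="test")   # validate on the test split again
print(metrics.box.map50)                              # e.g. report mAP@0.5
```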