
Fig. 2

From: Translating prognostic quantification of c-MYC and BCL2 from tissue microarrays to whole slide images in diffuse large B-cell lymphoma using deep learning


Overview of TMA processing and attention mechanism. (a) Each TMA is divided into small patches (instances), and each patch is passed through a pre-trained ResNet50. Depending on the experimental setting, features are extracted at different depths: the first, second, third, and fourth residual blocks of ResNet50 yield 256-, 512-, 1024-, and 2048-dimensional embeddings, respectively, after spatial averaging, with each successive block encoding progressively more complex features. Finally, an attention-based multiple instance learning (AB-MIL) model is trained on these embeddings to regress the TMA c-MYC or BCL2 score. (b) The gated attention mechanism passes each embedding through two parallel layers of the network (V and U), activated by tanh and sigmoid functions, respectively. The resulting parallel activations are multiplied element-wise and passed through a final fully connected layer (wᵀ), which maps each vector to a single raw attention weight. These raw weights are then normalized via softmax to produce the final attention weights.
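As a reading aid for panel (a), the sketch below shows how patch embeddings at the four depths could be extracted from a standard torchvision ResNet50 with spatial average pooling. This is not the authors' code; patch tiling, preprocessing, and the `embed_patches` helper are illustrative assumptions.

```python
import torch
import torchvision.models as models

# ImageNet-pretrained ResNet50, used here only as a frozen feature extractor.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.eval()

def embed_patches(patches: torch.Tensor, block: int = 3) -> torch.Tensor:
    """Embed a bag of TMA patches using the first `block` residual stages.

    patches: (num_patches, 3, H, W) tensor of patch images (instances).
    block:   1..4 -> 256-, 512-, 1024-, or 2048-dimensional embeddings.
    """
    stages = [resnet.layer1, resnet.layer2, resnet.layer3, resnet.layer4]
    with torch.no_grad():
        # Standard ResNet50 stem: conv -> batch norm -> relu -> max pool.
        x = resnet.maxpool(resnet.relu(resnet.bn1(resnet.conv1(patches))))
        for stage in stages[:block]:
            x = stage(x)
        # Spatial averaging collapses each feature map to one vector per patch.
        return x.mean(dim=(2, 3))  # (num_patches, embedding_dim)
```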
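Similarly, a minimal sketch of the gated attention in panel (b), following the standard AB-MIL formulation (Ilse et al.); the layer sizes and the final attention-weighted pooling step are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class GatedAttention(nn.Module):
    def __init__(self, embed_dim: int = 1024, attn_dim: int = 256):
        super().__init__()
        self.V = nn.Linear(embed_dim, attn_dim)  # tanh branch
        self.U = nn.Linear(embed_dim, attn_dim)  # sigmoid gate branch
        self.w = nn.Linear(attn_dim, 1)          # wT: maps to a raw weight

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        """h: (num_patches, embed_dim) bag of patch embeddings."""
        # Element-wise product of the two parallel activations.
        gated = torch.tanh(self.V(h)) * torch.sigmoid(self.U(h))
        raw = self.w(gated)                # (num_patches, 1) raw weights
        a = torch.softmax(raw, dim=0)      # softmax-normalized attention
        # Attention-weighted pooling gives one bag-level vector, which a
        # regression head would map to the TMA c-MYC or BCL2 score.
        return (a * h).sum(dim=0)          # (embed_dim,)
```

The sigmoid branch acts as a learned gate on the tanh branch, letting the model suppress patches that contribute little to the slide-level score before the softmax assigns the final attention weights.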
