Pantomimes are a unique movement category that conveys complex information about our intentions in the absence of any interaction with real objects. Indeed, we can pretend to use the same tool to perform different actions, or to achieve the same goal with different tools. Nevertheless, how our brain implements pantomimed movements is still poorly understood. In our study, we explored the neural encoding and functional interactions underlying pantomimes using multivariate pattern analysis (MVPA) and connectivity analysis of fMRI data. Participants performed pantomimed movements, either grasp-to-move or grasp-to-use, as if they were interacting with one of two different tools (scissors or an axe), both of which can be used to achieve the same final goal. We adopted MVPA to investigate two levels of representation during the planning and execution of pantomimes: (1) the encoding of different actions performed with the same tool, and (2) the encoding of the same final goal irrespective of the adopted tool. We found widespread encoding of action information within regions of the so-called “tool” network. Several nodes of this network, comprising regions within both the ventral and the dorsal stream, also represented goal information. The spatial distribution of goal information changed from planning, when it comprised posterior regions (i.e., parietal and temporal), to execution, when it also included anterior regions (i.e., premotor cortex). Moreover, connectivity analysis provided evidence for task-specific bidirectional coupling between the ventral stream and parieto-frontal motor networks. Overall, we showed that pantomimes were characterized by specific patterns of action and goal encoding and by task-dependent cortical interactions.