In this letter, we study distributed optimization, where a network of agents, interacting over directed graphs, collaborates to minimize the sum of locally known convex functions. Existing approaches over directed graphs are based on push-sum techniques, which use additional variables to asymptotically learn either the left or the right eigenvector of the underlying weight matrices. This strategy incurs additional computation and communication and introduces nonlinearity into the algorithm. In contrast, we propose a linear algorithm based on an inexact gradient method and a gradient estimation technique. Under the assumption that each local function is strongly convex with Lipschitz-continuous gradients, we show that the proposed algorithm converges geometrically to the global minimizer, provided that the step-size is sufficiently small and the condition number of the global objective function satisfies a bound related to the non-doubly stochastic weights. We present simulations to illustrate the theoretical findings.
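The combination described above (an inexact, mixed gradient step together with a gradient-estimation update) can be sketched as follows; this is a minimal illustration, not the letter's exact construction. The directed cycle, the quadratic local objectives, the weight matrices `A` (row-stochastic) and `B` (column-stochastic), and the step-size value are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a linear gradient-tracking update over a
# directed graph (assumed structure, not the letter's exact algorithm):
# each agent i holds f_i(x) = 0.5 * (x - b_i)**2, so the global
# minimizer of sum_i f_i is the mean of b.

n = 5
b = np.arange(1.0, n + 1)        # local data; global minimizer is b.mean() = 3.0

# Directed cycle with self-loops: agent i receives from agent i-1.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5                # self-weight
    A[i, (i - 1) % n] = 0.5      # weight on the in-neighbor
B = A.T                          # column-stochastic counterpart (assumed choice)

grad = lambda x: x - b           # stacked local gradients

alpha = 0.1                      # a sufficiently small step-size (assumed)
x = np.zeros(n)                  # local estimates of the minimizer
y = grad(x)                      # estimate of the average gradient

for _ in range(1000):
    x_next = A @ x - alpha * y               # inexact (mixed) gradient step
    y = B @ y + grad(x_next) - grad(x)       # gradient estimation update
    x = x_next

print(x)                         # all entries approach b.mean() = 3.0
```

Because `B` is column-stochastic, the sum of the tracker `y` always equals the sum of the current local gradients, which is what allows the iterates to converge geometrically to the global minimizer without any eigenvector-learning variables.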