Consider the following algorithm to compute the sum of two matrices.

def matrix_sum(myMatrix1, myMatrix2):
  rows_matrix1 = number of rows in myMatrix1
  rows_matrix2 = number of rows in myMatrix2
  cols_matrix1 = number of columns in myMatrix1
  cols_matrix2 = number of columns in myMatrix2

  if (rows_matrix1 not equal to rows_matrix2) or (cols_matrix1 not equal to cols_matrix2):
    print("Warning: matrix dimensions don't match")
    return NULL

  sumMatrix = a rows_matrix1 x cols_matrix1 array

  for row in [0, 1, ..., rows_matrix1 - 1]:
    for column in [0, 1, ..., cols_matrix1 - 1]:
      sumMatrix[row, column] = myMatrix1[row, column] + myMatrix2[row, column]

  return sumMatrix
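
As a concrete illustration, the pseudocode above can be translated into Python roughly as follows (a minimal sketch using lists of lists for the matrices, with `None` standing in for NULL):

```python
def matrix_sum(matrix1, matrix2):
    """Return the element-wise sum of two matrices given as lists of lists."""
    rows1, rows2 = len(matrix1), len(matrix2)
    cols1 = len(matrix1[0]) if matrix1 else 0
    cols2 = len(matrix2[0]) if matrix2 else 0

    # The matrices must have identical dimensions for element-wise addition.
    if rows1 != rows2 or cols1 != cols2:
        print("Warning: matrix dimensions don't match")
        return None

    # Build the rows1 x cols1 result, visiting every entry exactly once.
    return [[matrix1[r][c] + matrix2[r][c] for c in range(cols1)]
            for r in range(rows1)]
```

For example, `matrix_sum([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[6, 8], [10, 12]]`. Note that the nested loop touches each of the entries of the result exactly once, which is the key observation for the question below.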


Let `n` denote the number of rows in each input matrix and `m` denote the number of columns. What is the run-time complexity of the algorithm, using big-O notation?