Let $\omega = e^{2\pi i/n}$ and define $n\times n$ matrices $\Omega, \overline{\Omega}$ with $(j,k)$ entries $\omega^{j+k}$ and $\omega^{-(j+k)}$ respectively, so that $$\Omega = \begin{bmatrix}1&\omega &\omega^2 &\ldots &\omega^{n-1} \\ \omega &\omega^2 &\omega^3 &\ldots &1 \\ \vdots& \vdots& \vdots& \ddots &\vdots \\ \omega^{n-1} &1&\omega &\ldots &\omega^{n-2} \end{bmatrix}, \qquad \overline{\Omega} = \begin{bmatrix}1&\omega^{-1} &\omega^{-2} &\ldots &\omega^{-(n-1)} \\ \omega^{-1} &\omega^{-2} &\omega^{-3} &\ldots &1 \\ \vdots& \vdots& \vdots& \ddots &\vdots \\ \omega^{-(n-1)} &1&\omega^{-1} &\ldots &\omega^{-(n-2)} \end{bmatrix}.$$
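For a quick numerical sanity check of these definitions, here is a minimal NumPy sketch; the test size $n=5$ and the variable names are my own choices, not part of the problem.

```python
import numpy as np

n = 5                                  # arbitrary test size (any n >= 3 works)
omega = np.exp(2j * np.pi / n)         # primitive n-th root of unity
j, k = np.indices((n, n))              # row/column indices 0..n-1
Omega = omega ** (j + k)               # (j,k) entry: omega^(j+k)
Omega_bar = omega ** (-(j + k))        # (j,k) entry: omega^(-(j+k))

# The two matrices are entrywise complex conjugates of each other.
print(np.allclose(Omega_bar, Omega.conj()))   # True
```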
Note: I am numbering the rows and columns from $0$ to $n-1$, so that the top left-hand element of each matrix is $\omega^0 =1$. If the rows and columns are numbered from $1$ to $n$ then the top left-hand elements would be $\omega^2$ and $\omega^{-2}$, which seems less natural. But the numbering convention does not affect the answer to the problem, which would be the same in either case.
Assume $n\ge 3$, so that $\omega^2 \ne 1$ (for $n\le 2$ the two vectors defined below coincide and the argument needs trivial modification). Both matrices $\Omega, \overline{\Omega}$ have rank $1$, because in each case every column is a scalar multiple of the first column. Denote these first columns by $e_1 = (1,\omega ,\omega^2,\ldots ,\omega^{n-1})$ and $e_2 = (1,\omega^{-1} ,\omega^{-2},\ldots ,\omega^{-(n-1)})$ (written as row vectors for convenience, although they are really column vectors). Let $V$ be the two-dimensional subspace of $\Bbb{C}^n$ spanned by $e_1$ and $e_2$. The linear transformations represented by the matrices $\Omega, \overline{\Omega}$ both have range in $V$, so they restrict to linear operators on $V$; denote these restrictions by $\Omega\big| _V, \overline{\Omega}\big|_V$. Using the facts that $$\sum_{k=0}^{n-1}\omega^k\omega^{-k} = n \qquad\text{and}\qquad \sum_{k=0}^{n-1}\omega^{2k} = \frac{\omega^{2n}-1}{\omega^2-1} = 0$$ (the second sum is a geometric series, which vanishes because $\omega^{2n}=1$ while $\omega^2\ne1$), you can check that $\Omega\big| _V(e_1) = 0$, $\Omega\big| _V(e_2) = ne_1$, $\overline{\Omega}\big| _V(e_1) = ne_2$ and $\overline{\Omega}\big| _V(e_2) = 0$; for example, the $j$th coordinate of $\Omega e_2$ is $\sum_{k=0}^{n-1}\omega^{j+k}\omega^{-k} = n\omega^j$. So the matrices of these transformations with respect to the basis $\{e_1,e_2\}$ of $V$ are $$ \Omega\big| _V = \begin{bmatrix}0&n\\0&0 \end{bmatrix},\qquad \overline{\Omega}\big| _V = \begin{bmatrix}0&0\\n&0 \end{bmatrix}.$$
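The rank claim and the action on $e_1, e_2$ can be verified the same way; here is a self-contained sketch under the same assumptions (NumPy, arbitrary choice $n=5$):

```python
import numpy as np

n = 5
omega = np.exp(2j * np.pi / n)
j, k = np.indices((n, n))
Omega = omega ** (j + k)
Omega_bar = omega ** (-(j + k))
e1 = omega ** np.arange(n)             # (1, omega, ..., omega^(n-1))
e2 = omega ** (-np.arange(n))          # (1, omega^-1, ..., omega^-(n-1))

# Both matrices have rank 1.
print(np.linalg.matrix_rank(Omega), np.linalg.matrix_rank(Omega_bar))  # 1 1

# Action on e1 and e2.
print(np.allclose(Omega @ e1, 0))            # Omega e1 = 0
print(np.allclose(Omega @ e2, n * e1))       # Omega e2 = n e1
print(np.allclose(Omega_bar @ e1, n * e2))   # Omega-bar e1 = n e2
print(np.allclose(Omega_bar @ e2, 0))        # Omega-bar e2 = 0
```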
Turning now to the given matrix $A$, notice that $A = \frac12(\Omega + \overline{\Omega})$. Therefore the restriction of $A$ to $V$ has matrix $\frac n2 \begin{bmatrix}0&1\\1&0 \end{bmatrix}$ (with respect to the basis $\{e_1,e_2\}$). The eigenvalues of $ \begin{bmatrix}0&1\\1&0 \end{bmatrix}$ are $\pm1$, so $A$ has eigenvalues $\pm\frac n2$. Also, $A$ is the sum of two rank-$1$ matrices, so it has rank at most $2$ and hence at most two nonzero eigenvalues. It follows that the eigenvalues of $A$ are $0$ (with multiplicity $n-2$) and $\pm\frac n2.$ Therefore the eigenvalues of $I+A$ are $1$ (with multiplicity $n-2$) and $1\pm\frac n2.$ Since the determinant is the product of the eigenvalues, the conclusion is that $\det(I+A) = \bigl(1+\frac n2\bigr)\bigl(1-\frac n2\bigr) = 1 - \frac{n^2}4.$
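As a final check, the spectrum of $A$ and the value of $\det(I+A)$ agree with a direct computation; a sketch, again with the arbitrary choice $n=5$ (for which $1-\frac{n^2}4 = -5.25$):

```python
import numpy as np

n = 5
idx = np.add.outer(np.arange(n), np.arange(n))   # the exponents j + k
A = np.cos(2 * np.pi * idx / n)                  # A = (Omega + Omega-bar)/2, real symmetric

eigs = np.linalg.eigvalsh(A)                     # ascending order
print(eigs)                                      # -n/2, then n-2 zeros, then n/2
print(np.linalg.det(np.eye(n) + A))              # 1 - n**2/4 = -5.25
```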