The Ethernet switch is a primary building block for today's enterprise networks and data centers. As network technologies converge upon a single Ethernet fabric, there is ongoing pressure to improve the performance and efficiency of the switch while maintaining flexibility and a rich set of packet processing features. The OpenFlow architecture aims to provide flexibility and programmable packet processing to meet these converging needs. Of the many ways to create an OpenFlow switch, a popular choice is to make heavy use of ternary content addressable memories (TCAMs). Unfortunately, TCAMs can consume a considerable amount of power and, when used to match flows in an OpenFlow switch, place a lower bound on switch latency. In this paper, we propose enhancing an OpenFlow Ethernet switch with per-port packet prediction circuitry in order to simultaneously reduce latency and power consumption without sacrificing the rich policy-based forwarding enabled by the OpenFlow architecture. Packet prediction exploits the temporal locality in network communications to predict the flow classification of incoming packets. When predictions are correct, latency can be reduced, and significant power savings can be achieved by bypassing the full lookup process. Simulation studies using actual network traces indicate that correct prediction rates of 97% are achievable using only a small amount of prediction circuitry per port. These studies also show that prediction circuitry can help reduce the power consumed by a lookup process that includes a TCAM by 92% and simultaneously reduce the latency of a cut-through switch by 66%.
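To make the idea concrete, the following is a minimal sketch (not the paper's hardware design) of how a per-port predictor might exploit temporal locality: each port caches its most recent flow classification and bypasses the full TCAM lookup when the next packet's header matches. All names here (`PortPredictor`, `full_tcam_lookup`, the single-entry cache) are illustrative assumptions.

```python
# Hypothetical per-port packet predictor: caches the last flow decision
# seen on a port and bypasses the full TCAM lookup on a match.

class PortPredictor:
    """One small prediction entry per port: (header, action)."""
    def __init__(self):
        self.entry = None  # (header_fields, forwarding_action)

    def predict(self, header):
        if self.entry is not None and self.entry[0] == header:
            return self.entry[1]  # hit: full lookup bypassed
        return None               # miss: fall back to full lookup

    def update(self, header, action):
        self.entry = (header, action)

def full_tcam_lookup(header, flow_table):
    # Stand-in for the power-hungry search over all flow rules.
    return flow_table.get(header, "drop")

def run_port(packets, flow_table):
    """Process a packet trace on one port; return the prediction hit rate."""
    pred = PortPredictor()
    hits = 0
    for header in packets:
        action = pred.predict(header)
        if action is None:
            action = full_tcam_lookup(header, flow_table)
            pred.update(header, action)
        else:
            hits += 1
    return hits / len(packets)

# Temporal locality: packets of the same flow arrive in bursts, so most
# packets hit the predictor and only flow transitions pay for a lookup.
flows = {"A": "port1", "B": "port2"}
trace = ["A"] * 10 + ["B"] * 10 + ["A"] * 10
print(run_port(trace, flows))  # 0.9 (3 misses at flow changes, 27 hits)
```

Because each predictor holds only a single cached entry per port, the added circuitry stays small; the measured 97% prediction rate in the paper reflects how strongly real traces exhibit this kind of burstiness.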